WEDNESDAY October 24, 10:00am - 11:30am | Forum 7
In 1959, Arthur Samuel, a research scientist at IBM, coined the term machine learning to describe the ability of computers to learn from data without being explicitly programmed. Starting from this statement, we briefly introduce in this tutorial the historical background of machine learning (ML) and some significant milestones achieved to date. Then, we outline the current enabling factors of effective ML, i.e., data availability, computational power, and algorithmic improvements, describing the key contribution of each factor to machine learning. Following the taxonomy proposed by Pedro Domingos in “The Master Algorithm”, we cluster current machine learning algorithms into five classes, describing their main theoretical concepts and achievements.
We then describe how these approaches can solve different types of tasks, e.g., supervised and unsupervised learning. Given the variety of algorithms and tasks, we introduce a concrete and simple running example to illustrate the concepts throughout the tutorial. The current major concerns of ML are outlined (bias/variance dilemma and overfitting, exploration vs. exploitation…) and serve as a basis to introduce the different fields of study of ML, e.g., computational learning theory and statistical learning theory. Although a clear boundary cannot always be drawn between research fields, we attempt to give an overview of the ML research landscape and to describe the solutions each field proposes (training/testing split, regularization, PAC framework…) with respect to the main concerns mentioned above. Using the running example, we describe a common ML workflow and outline the main differences compared to standard engineering or scientific approaches (iterative work, formalizing the learning rather than the problem, data-driven approach…). We further give insights into deep learning, a popular machine learning approach that has achieved significant results in recent years. We propose a bottom-up description of the different abstraction levels of ML, i.e., neuron, layer, and network architecture. In particular, a detailed treatment of the maximum likelihood interpretation of logistic regression serves as a key component of the explanation, and an overview of optimization methods is given.
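The workflow elements mentioned above (train/test split, regularization, and the maximum likelihood view of logistic regression) can be sketched in a minimal, self-contained example. This is an illustrative sketch only, not the tutorial's actual material: the toy data, the regularization strength, and the learning rate are assumptions chosen for the example, assuming NumPy is available.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs in 2D.
n = 200
X = np.vstack([rng.normal(-1.0, 1.0, size=(n // 2, 2)),
               rng.normal(+1.0, 1.0, size=(n // 2, 2))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Train/test split: one of the remedies against overfitting.
idx = rng.permutation(n)
train, test = idx[:150], idx[150:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Maximum likelihood interpretation: fitting logistic regression
# amounts to minimizing the negative log-likelihood (cross-entropy),
# here with an added L2 regularization term, by gradient descent.
w = np.zeros(2)
b = 0.0
lam = 0.01  # L2 regularization strength (illustrative value)
lr = 0.1    # learning rate (illustrative value)
for _ in range(500):
    p = sigmoid(X[train] @ w + b)
    grad_w = X[train].T @ (p - y[train]) / len(train) + lam * w
    grad_b = np.mean(p - y[train])
    w -= lr * grad_w
    b -= lr * grad_b

# Evaluate on the held-out test set, not on the training data.
acc = np.mean((sigmoid(X[test] @ w + b) > 0.5) == y[test])
print(f"test accuracy: {acc:.2f}")
```

The gradient used here follows directly from differentiating the cross-entropy loss, which is the point of the maximum likelihood interpretation: the "natural" training objective of logistic regression is not ad hoc but derived from a probabilistic model.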
In the final section of the tutorial, we present an advanced application of ML to reduce the power consumption of LTE-Advanced modems in mobile devices (LTE-Advanced is the Long Term Evolution Advanced cellular radio standard defined by the 3GPP standardization body). After describing the task we aim to solve using ML and introducing the generic terms of LTE wireless communications from the mobile device perspective, we concretely describe a proposed methodology to evaluate ML performance on the modem at design time (power trajectories). We then outline, point by point, how our approach accounts for the constraints of state-of-the-art embedded system implementations for mobile devices, as well as those of cellular radio systems in general, during the evaluation of ML algorithms. Finally, as an illustration of the ML workflow depicted in the previous section, we describe the simulation environment we used and highlight specific framework features that enable an efficient analysis and scientific workflow (big data, caching, hashing, transactional data synchronization, model selection…).
This tutorial is appropriate for engineers who want to gain insights into the fundamentals of machine learning and into system-level performance analysis methodologies for ML algorithms in embedded wireless systems such as LTE-Advanced modems for mobile devices.