Exercising the Machine Learning Muscle
From my experience in Marketing, I knew that machine learning was transforming how businesses interact with customers, and how better computer vision and adaptive content automation were reshaping the creative process. What the experience at MIT did, though, was confirm my suspicion that machine learning is one of the necessary building blocks for today’s computer scientists. I was struck by the variety of problems it is already tackling. Think of the list of industries represented above. Every one of those individuals is going to go back to their company (or start their own) and transform how things are done in that domain.
My personal reason for the deep dive was to sharpen some muscles that never were properly exercised. I briefly studied AI 25+ years ago, but machine learning was not emphasized. I went back to my original textbook and was surprised that almost everything we covered at MIT was there, but in the late eighties/early nineties, it just didn’t get much airtime in the classroom. Keep in mind, though, that I was a computer scientist, not a statistician or economist. However, as computer scientists, we were all taught some basic building blocks that would serve us in a variety of future professional endeavors. Lempel-Ziv-Welch was in our quiver. Boyer-Moore and Knuth-Morris-Pratt were in there too. Dijkstra’s algorithm was known. Merge, Quick, and Heap Sorts were known. Fourier transforms were included as well. With these tools, and many more, we were well equipped to solve a broad range of professional challenges.
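As a refresher on how compact those toolbox staples really are, here is a minimal sketch of one of them, Dijkstra’s algorithm, using a priority queue; the toy graph is hypothetical, just for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted graph.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict of node -> shortest distance (unreachable nodes omitted).
    """
    dist = {source: 0}
    heap = [(0, source)]  # (distance so far, node)
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical toy graph: edges a->b(1), a->c(4), b->c(2), b->d(5), c->d(1)
graph = {
    "a": [("b", 1), ("c", 4)],
    "b": [("c", 2), ("d", 5)],
    "c": [("d", 1)],
}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3, 'd': 4}
```

Twenty-five years on, the algorithm still fits on one screen, which is exactly why it stayed in the quiver.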
I’m convinced now, though, that every burgeoning computer scientist today should have machine learning algorithms in their toolbox. I think they are a necessity. By 2020, the U.S. Bureau of Labor Statistics predicts that there will be 1.4 million computing jobs but just 400,000 computer science students with the skills to apply for those jobs. Computer programming jobs are growing at a rate that is double the national average, according to a National Association of Colleges and Employers report. If you want to be a part of the future, being a computer scientist looks pretty attractive. If you want to be an effective computer scientist, you should add machine learning tools to your toolbox. The tools should include the Perceptron Algorithm, Stochastic Gradient Descent, Collaborative Filtering, Kernels, Convolutional Neural Nets, Recurrent Neural Nets, K-Means, K-Medoids, Bayesian Networks and many more that I’m not educated enough to recommend.
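To show how approachable the first item on that list is, here is a minimal sketch of the perceptron learning rule on a tiny, hypothetical, linearly separable dataset (an AND-style labeling), assuming labels in {-1, +1}; it is an illustration, not a production implementation.

```python
def perceptron_train(samples, epochs=20):
    """Train a perceptron. samples: list of (features, label), label in {-1, +1}."""
    n = len(samples[0][0])
    w = [0.0] * n  # weights
    b = 0.0        # bias
    for _ in range(epochs):
        errors = 0
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified: nudge the boundary toward x
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                errors += 1
        if errors == 0:  # converged (guaranteed on linearly separable data)
            break
    return w, b

def perceptron_predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Hypothetical toy data: output is +1 only when both inputs are 1
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = perceptron_train(data)
print([perceptron_predict(w, b, x) for x, _ in data])  # [-1, -1, -1, 1]
```

The entire “learning” step is two lines of arithmetic, which is part of why the perceptron makes such a good first entry in the toolbox.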
A computer scientist should start from the position that the machine they are programming can find solutions on its own. The machine can learn from experience. The machine can detect unwanted collisions, reduce our medical risk, monitor for our safety, or even predict events that we can’t yet foresee. Machines are not solely dependent on us to provide all the rules or the answers. In doing so, they will help us tackle some of our biggest problems. Below I’ve listed just a tiny sample of domains that are being transformed, drawn only from what I stumbled across down at MIT. None of this is speculation, but a simple reflection on what is already being done.
Consumer Interface: Natural language processing and deep learning architectures are improving the quality of customer interaction and service through better, more natural query articulation in a normal, everyday context. Context is fed from IoT sensors that machine learning algorithms can interpret to better understand your environment. Virtual and augmented reality, and the ability for individuals to record their own 360-degree experiences through lightweight portable vision systems, will feed these algorithms and modify the way we interact with brands and each other. With this richer context, machines can also act as assistants that help each of us reach our individual goals. Further, the ubiquity of machine assistance will democratize what we now think of as “coding” and allow those of different ages, of different abilities/disabilities, of different education, etc. to participate in programming what is around us.
Health: Biological information processing will lead to better reprogramming of cellular behavior. Better computer vision is already finding precursors to cancer and other diseases in medical imaging that the human eye misses.
Farming: Farmers are using machine learning to maintain healthy yields and to quickly predict or adapt to changes in climate or unforeseen disasters.
Policy: Deep learning is helping to map and measure cultural impacts of policy decisions and influence economic and political points of view.
Accountability: Researchers are working to adapt algorithms so they produce fair outcomes that resist human manipulation while still allowing for accountability. Obviously, these are big theoretical concepts that are at the core of successfully applying machine learning in the real world.