Machine learning is a branch of computer science that focuses on getting computers to act without being explicitly programmed to do so. The general formula: give a computer a clear end goal, then let it fail over and over, learning from those mistakes until it finally achieves that goal. In other words, the machine learns.
Why is this valuable? In theory, we can hand computers extremely difficult problems we don’t know how to answer (or don’t have the time for), and they will toil away until they reach the best answer.
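To make the "fail over and over until it learns" idea concrete, here is a minimal toy sketch (not the actual method used by any project mentioned in this article): an agent repeatedly tries three actions with hidden average payoffs and, from noisy feedback alone, gradually figures out which action is best. The reward values and the epsilon-greedy strategy are illustrative assumptions.

```python
import random

random.seed(0)

true_rewards = [0.2, 0.5, 0.8]  # hidden average payoff of each action
estimates = [0.0, 0.0, 0.0]     # the agent's running estimates
counts = [0, 0, 0]
epsilon = 0.1                   # chance of exploring a random action

for trial in range(2000):
    if random.random() < epsilon:
        action = random.randrange(3)              # explore: try something random
    else:
        action = estimates.index(max(estimates))  # exploit: use the best guess so far
    # A "failure" is just a low reward the agent learns from.
    reward = true_rewards[action] + random.gauss(0, 0.1)
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

best = estimates.index(max(estimates))
print(best)  # after enough trials, the agent should settle on action 2
```

Nobody told the program that action 2 was best; it discovered that purely by trying, failing, and updating its estimates — the same broad recipe, scaled up enormously, behind the examples below.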
While not a particularly practical example, an enterprising programmer created a program that taught itself how to complete a level of Super Mario World, which you can see below:
But going around corners sideways and doing doughnuts is far more entertaining, and the Aerospace Controls Laboratory has been more than happy to oblige: the lab created a remote-controlled car that, via machine learning, taught itself to perform a doughnut (that is, to spin around an origin point) in the most efficient way it could find.
Once the program had the drifting process down, it pulled off some even more impressive stunts, like performing the same maneuver on a different surface (which makes it considerably harder) and following a moving object while maintaining the slide. Our mouths were agape when that portion of the video popped up:
But why do this at all? Because it’s difficult: professional race car drivers pull this stunt when they win a race, and sometimes even they don’t pull it off cleanly, which makes the Aerospace Controls Laboratory’s achievement all the more impressive.
If you’d like even deeper insight into autonomous RC car drifting, Cornell University hosts a short nine-page paper detailing the process in considerable depth, which you can find here.
[Source – YouTube]