Some new and exciting Machine Learning news from Google’s “Google Brain” team today. They’ve recently been working on getting computers to do their own Machine Learning configuration, with some great results. Their latest paper, “Learning Transferable Architectures for Scalable Image Recognition,” states that they’ve found a new way of building up ML models that beats even the best human-built ones.
Hopefully this explanation doesn’t get too meta for everyone, but here goes! Normally in Machine Learning you have to tune and hand-develop the “model” that the ML program uses. This might mean adding more nodes, using fewer parameters, and so on. There are a huge number of ways you can build a Machine Learning system, and as such, it’s almost an art form. Usually the more experienced programmers know what to tweak and so get the best results.
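To make that concrete, here’s a tiny sketch of what hand-tuning looks like versus letting the computer search for you. Everything here is hypothetical: the `validation_accuracy` function is a toy stand-in for actually training a network and measuring its accuracy, and the “sweet spot” values are made up purely so the sketch runs instantly.

```python
import itertools

# Toy stand-in for "train a model with these settings and measure
# validation accuracy". In real work this would train a network for
# hours; here it's a made-up closed-form score with a fake sweet spot
# at 3 layers, 64 nodes, learning rate 0.01.
def validation_accuracy(num_layers, num_nodes, learning_rate):
    penalty = (abs(num_layers - 3) * 0.05
               + abs(num_nodes - 64) / 640
               + abs(learning_rate - 0.01) * 2)
    return max(0.0, 0.95 - penalty)

# Hand-tuning: a human picks values from experience and intuition.
hand_tuned = validation_accuracy(num_layers=2, num_nodes=32,
                                 learning_rate=0.001)

# A naive automated alternative: exhaustively try a small grid and
# keep whichever configuration scores best.
grid = itertools.product([1, 2, 3, 4], [16, 32, 64, 128],
                         [0.001, 0.01, 0.1])
best_config = max(grid, key=lambda cfg: validation_accuracy(*cfg))

print(f"hand-tuned accuracy: {hand_tuned:.3f}")
print(f"best grid config:    {best_config}")
```

Even this brute-force grid search beats the human guess here, but it only explores a handful of knobs. What Google Brain did goes much further.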
In the case of Google Brain, they decided to go all meta and get the machine to do this tweaking work. So now we have a machine learning how to structure and program a machine learning program. Yeah… pretty sweet huh? The way they went about this was to use the Neural Architecture Search (NAS) framework to search for a good architecture on the small CIFAR-10 image classification dataset. They then transferred the best found architecture to the much larger ImageNet dataset.
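The search-then-transfer idea can be sketched in a few lines. To be clear, this is not the actual NAS method (which uses a reinforcement-learning controller to propose architectures); it’s a simplified random search over a hypothetical space of “cell” operations, with a toy `proxy_score` standing in for training each candidate on CIFAR-10:

```python
import random

random.seed(0)

# Hypothetical search space of cell operations, loosely inspired by
# the NAS idea of searching over building blocks, not whole networks.
OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]

def sample_architecture(num_ops=3):
    # An "architecture" here is just a list of operations in a cell.
    return [random.choice(OPS) for _ in range(num_ops)]

def proxy_score(arch):
    # Toy stand-in for "train this candidate on the small dataset and
    # measure accuracy". The weights below are invented for the sketch.
    weights = {"conv3x3": 0.30, "conv5x5": 0.25,
               "maxpool": 0.15, "identity": 0.05}
    return sum(weights[op] for op in arch) / len(arch)

# Search on the cheap proxy task (the CIFAR-10 stage in the paper).
candidates = [sample_architecture() for _ in range(50)]
best_cell = max(candidates, key=proxy_score)

# "Transfer": reuse the best cell, stacked deeper for the big task,
# the way the cell found on CIFAR-10 was scaled up for ImageNet.
imagenet_model = best_cell * 4

print("best cell found:", best_cell)
print("stacked model depth:", len(imagenet_model))
```

The key trick is that the expensive search happens on the small, cheap dataset, and only the winning building block gets scaled up to the big one.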
This allowed them to save a huge amount of time searching for the best architectures to use, cutting the search down from four weeks to just four days! With some minor modifications to the best architecture found on CIFAR-10, applying it to ImageNet gave them the following results:
On ImageNet, an architecture constructed from the best cell achieves state-of-the-art accuracy of 82.3% top-1 and 96.0% top-5, which is 0.8% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS.
What This Means For Machine Learning
So let’s just stop for a second and think about this. They used computers to design computer programs that beat the best human-designed programs, even while using fewer resources. That to me is pretty damn impressive! I hope you can also see how important this might be in the future. If this type of thing catches on and can be applied in other fields, it could really speed things up. We now have machines helping other machines to learn on their own. Quite fascinating! If you’re interested in getting into Machine Learning, I’ve recently created a great piece on how to build your own Deep Learning computer.
Do you think we should be going down this path of having machines teaching other machines or is it too dangerous? Let us know in the comments below!