{"id":308,"date":"2017-08-13T09:16:27","date_gmt":"2017-08-12T23:16:27","guid":{"rendered":"https:\/\/www.alexshoolman.com\/blog\/?p=308"},"modified":"2024-08-08T10:28:14","modified_gmt":"2024-08-08T00:28:14","slug":"news-google-teaches-machines-to-beat-humans-at-machine-learning","status":"publish","type":"post","link":"https:\/\/www.alexshoolman.com\/blog\/2017\/08\/13\/news-google-teaches-machines-to-beat-humans-at-machine-learning\/","title":{"rendered":"Google&#8217;s Skynet Now Beats Humans At Machine Learning"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"aligncenter wp-image-412 size-large\" src=\"https:\/\/www.alexshoolman.com\/blog\/wp-content\/uploads\/2017\/08\/Wall-e-1024x682.jpeg\" alt=\"Walle used Machine Learning\" width=\"750\" height=\"500\" title=\"\" srcset=\"https:\/\/www.alexshoolman.com\/blog\/wp-content\/uploads\/2017\/08\/Wall-e-1024x682.jpeg 1024w, https:\/\/www.alexshoolman.com\/blog\/wp-content\/uploads\/2017\/08\/Wall-e-300x200.jpeg 300w, https:\/\/www.alexshoolman.com\/blog\/wp-content\/uploads\/2017\/08\/Wall-e-768x512.jpeg 768w, https:\/\/www.alexshoolman.com\/blog\/wp-content\/uploads\/2017\/08\/Wall-e-360x240.jpeg 360w, https:\/\/www.alexshoolman.com\/blog\/wp-content\/uploads\/2017\/08\/Wall-e.jpeg 1280w\" sizes=\"auto, (max-width: 750px) 100vw, 750px\" \/><\/p>\n<p>Some new and exciting Machine Learning news from Google&#8217;s &#8220;Google Brain&#8221; team today. They&#8217;ve recently been working on getting computers to do their own Machine Learning configuration with some great results. The latest paper &#8220;<a href=\"https:\/\/arxiv.org\/abs\/1707.07012\" target=\"_blank\" rel=\"noopener\">Learning Transferable Architectures for Scalable Image Recognition&#8221;<\/a>\u00a0states that they&#8217;ve been able to produce a new way of building up ML models that beats even the best human built ones.<\/p>\n<p>Hopefully this explanation doesn&#8217;t get too meta for everyone but here goes! Normally in Machine Learning you have to tune and hand develop the &#8220;model&#8221; that the ML program uses. This might mean having more nodes, less parameters and so on. There are a large amount of ways you can build a Machine Learning system and as such, it&#8217;s almost an art form. Usually the more experienced programmers know what to tweak and so get the best results.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www.alexshoolman.com\/blog\/wp-content\/uploads\/2018\/03\/So-Hot-Right-Now.jpg\" alt=\"\" width=\"620\" height=\"497\" class=\"aligncenter size-full wp-image-1581\" title=\"\" srcset=\"https:\/\/www.alexshoolman.com\/blog\/wp-content\/uploads\/2018\/03\/So-Hot-Right-Now.jpg 620w, https:\/\/www.alexshoolman.com\/blog\/wp-content\/uploads\/2018\/03\/So-Hot-Right-Now-300x240.jpg 300w\" sizes=\"auto, (max-width: 620px) 100vw, 620px\" \/><\/p>\n<p>In the case of Google Brain, they decided to go all meta and get the\u00a0<strong><em>machine\u00a0<\/em><\/strong>to do this tweaking work. So now we have a machine learning how to structure and program a machine learning program. Yeah&#8230; pretty sweet huh? The way they went about this was to use the\u00a0Neural Architecture Search (NAS) framework to\u00a0search for a good architecture on the small CIFAR-10 image classification dataset. 
In the case of Google Brain, they decided to go all meta and get the ***machine*** to do this tweaking work. So now we have a machine learning how to structure and program a machine learning program. Yeah... pretty sweet, huh? The way they went about it was to use their Neural Architecture Search (NAS) framework to search for a good architecture on the small CIFAR-10 image classification dataset, then transfer the best architecture it found to the much larger ImageNet dataset.
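To give a feel for the overall loop, here's a toy sketch of the search-then-transfer idea. It is not Google's actual system: the real NAS framework trains a recurrent-network controller with reinforcement learning to propose architectures, whereas this sketch substitutes plain random search, and `proxy_score` is a hypothetical stand-in for actually training each candidate on CIFAR-10:

```python
# A toy sketch of search-then-transfer, NOT Google's actual NAS system
# (which trains an RNN controller with reinforcement learning). Plain
# random search stands in for the controller, and a stub scoring
# function stands in for training each candidate on CIFAR-10.
import random

SEARCH_SPACE = {
    "num_blocks":    [2, 3, 4, 5],
    "base_channels": [16, 32, 64],
    "kernel_size":   [3, 5],
    "dropout":       [0.2, 0.3, 0.5],
}

def sample_architecture(rng: random.Random) -> dict:
    """The 'controller': propose one candidate from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

def proxy_score(arch: dict, rng: random.Random) -> float:
    """Hypothetical stand-in for training `arch` on the small CIFAR-10
    proxy task and returning validation accuracy; a real evaluation
    would take GPU-hours per candidate."""
    return rng.random()  # placeholder, not a real benchmark

rng = random.Random(0)
candidates = [sample_architecture(rng) for _ in range(100)]
best = max(candidates, key=lambda arch: proxy_score(arch, rng))

# "Transfer": reuse the best cell found on the cheap proxy task, scaled
# up (deeper, wider) for the much larger ImageNet dataset. The scaling
# factors here are illustrative only.
imagenet_arch = {**best,
                 "num_blocks": best["num_blocks"] * 4,
                 "base_channels": best["base_channels"] * 4}
print("best CIFAR-10 cell:", best)
print("scaled for ImageNet:", imagenet_arch)
```

The key idea the paper exploits is visible even in this toy version: candidates are scored on a small, cheap proxy task, and only the winner is scaled up for the expensive target dataset.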
Searching on the small dataset first saved a huge amount of time, cutting the search from around 4 weeks down to just 4 days! They then applied the best architecture from CIFAR-10 to ImageNet with some minor modifications, which got them the following results:

> On ImageNet, an architecture constructed from the best cell achieves state-of-the-art accuracy of 82.3% top-1 and 96.0% top-5, **which is 0.8% better in top-1 accuracy than the best human-invented architectures** while having 9 billion fewer FLOPS.

[Image: Machine Learning research at Google. Source: https://research.google.com]

### What This Means For Machine Learning

So let's just stop for a second and think about this. They used computers to design computer programs that beat ***the best human-designed programs***, even while using fewer resources. That to me is pretty damn impressive! I hope you can also see how important this might be in the future. If this type of approach catches on and can be applied in other fields, it could really speed things up. We're now having machines help ***other*** machines to learn on their own. Quite fascinating! If you're interested in getting into Machine Learning, I've recently created a great piece on [how to build your own Deep Learning computer](https://www.alexshoolman.com/blog/2017/08/16/new-free-tool-the-ultimate-guide-to-building-a-deep-learning-computer/).

Do you think we should be going down this path of having machines teach other machines, or is it too dangerous? Let us know in the comments below!