{"id":1507,"date":"2017-07-12T08:40:00","date_gmt":"2017-07-12T08:40:00","guid":{"rendered":"https:\/\/www.gtechbooster.com\/?p=1507"},"modified":"2017-07-12T08:40:00","modified_gmt":"2017-07-12T08:40:00","slug":"a-brief-history-of-ai","status":"publish","type":"post","link":"https:\/\/gtechbooster.com\/a-brief-history-of-ai\/","title":{"rendered":"A Brief History of\u00a0AI"},"content":{"rendered":"\n<p>The history of artificial intelligence (AI) is a fascinating journey that spans several decades, marked by significant milestones, breakthroughs, and paradigm shifts. The quest to create intelligent machines that can mimic human cognitive functions and perform complex tasks has captivated the imagination of scientists, researchers, and innovators around the world.<\/p>\n\n\n\n<p>Despite all the current hype, AI is not a new field of study; its roots reach back to the 1950s. 
If we exclude the pure philosophical path that runs from the Ancient Greeks to Hobbes, Leibniz, and Pascal, AI as we know it officially began&nbsp;<strong>in 1956 at Dartmouth College<\/strong>, where the most eminent experts gathered to brainstorm on simulating intelligence.<\/p>\n\n\n\n<p>This happened only a few years after Asimov set out his three laws of robotics, and, more relevantly, after the famous paper published by Turing (1950), in which he proposed for the first time the idea of a&nbsp;<strong><em>thinking machine<\/em><\/strong>&nbsp;and the now-popular Turing test for assessing whether such a machine in fact shows any intelligence.<\/p>\n\n\n\n<p>As soon as the research group at Dartmouth publicly released the ideas that arose from that summer meeting, a flow of government funding was reserved for the study of creating a nonbiological intelligence.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">The Birth of AI<\/h2>\n\n\n\n<p>The origins of AI can be traced back to the mid-20th century, with the seminal work of Alan Turing, who proposed the concept of a &#8220;universal machine&#8221; capable of performing any computational task. Turing&#8217;s theoretical framework laid the groundwork for the development of intelligent machines and set the stage for subsequent advancements in AI research.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Early Foundations<\/h2>\n\n\n\n<p>In the 1950s and 1960s, AI pioneers such as John McCarthy, Marvin Minsky, and Herbert Simon made significant contributions to the field, introducing fundamental concepts and laying the theoretical foundations of AI. 
McCarthy coined the term &#8220;artificial intelligence&#8221; and organized the Dartmouth Conference in 1956, which is widely regarded as the birth of AI as a distinct field of study.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Symbolic AI and Expert Systems<\/h2>\n\n\n\n<p>During the 1960s and 1970s, AI research focused on symbolic reasoning and problem-solving using logical rules and symbolic representations. This period saw the development of expert systems, which aimed to capture human expertise in specific domains and make it accessible through computer programs. Notable examples include MYCIN, an expert system for diagnosing infectious diseases, and DENDRAL, a system for chemical analysis.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">AI Winter and Resurgence<\/h2>\n\n\n\n<p>The 1980s and 1990s were characterized by alternating periods of optimism and skepticism towards AI. The so-called &#8220;AI winter&#8221; referred to phases of reduced funding and waning interest in AI research due to unmet expectations and overhyped promises. However, advancements in machine learning, neural networks, and computational power led to a resurgence of interest in AI, laying the groundwork for future breakthroughs.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The phantom menace<\/strong><\/h3>\n\n\n\n<p>At that time, AI seemed within easy reach, but it turned out that was not the case. 
At the end of the sixties, researchers realized that AI was indeed a tough field to manage, and the initial spark that had attracted the funding started to dissipate.<\/p>\n\n\n\n<p>This phenomenon, which has characterized AI throughout its history, is commonly known as the \u201c<strong><em>AI effect<\/em><\/strong>\u201d, and consists of two parts:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The constant promise of a real AI arriving in the following decade;<\/li>\n\n\n\n<li>The discounting of AI\u2019s behavior once it has mastered a given problem, continuously redefining what intelligence means.<\/li>\n<\/ul>\n\n\n\n<p>In the United States, DARPA\u2019s main reason for funding AI research was the idea of creating a&nbsp;<em>perfect machine translator<\/em>, but two consecutive events wrecked that plan, beginning what would later be called the first&nbsp;<em>AI winter<\/em>.<\/p>\n\n\n\n<p>In fact, the&nbsp;<strong>Automatic Language Processing Advisory Committee (ALPAC)<\/strong>&nbsp;report in the US in 1966, followed by the \u201c<strong>Lighthill report<\/strong>\u201d (1973), assessed the feasibility of AI given the developments of the time and concluded negatively about the possibility of creating a machine that could learn or be considered intelligent.<\/p>\n\n\n\n<p>These two reports, together with the limited data available to feed the algorithms and the scarce computational power of the machines of that period, caused the field to collapse, and AI fell into disgrace for the entire decade.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Attack of the (expert) clones<\/strong><\/h3>\n\n\n\n<p>In the eighties, though, a new wave of funding in the UK and Japan was motivated by the introduction of \u201c<strong><em>expert systems<\/em><\/strong>\u201d, which were essentially examples of&nbsp;<strong><em>narrow AI<\/em><\/strong>&nbsp;as defined&nbsp;<a rel=\"noreferrer noopener\" 
href=\"https:\/\/medium.com\/cyber-tales\/artificial-intelligence-what-it-is-and-why-now-4e4431942623#.5aqhg7aqh\" target=\"_blank\">in previous articles<\/a>.<\/p>\n\n\n\n<p>These programs were, in fact, able to simulate the skills of human experts in specific domains, but this was enough to stimulate a new funding trend. The most active player during those years was the Japanese government, whose rush to create the fifth-generation computer indirectly forced the US and UK to reinstate funding for AI research.<\/p>\n\n\n\n<p>This golden age did not last long, though, and when the funding goals were not met, a new crisis began. In 1987, personal computers became more powerful than the&nbsp;<strong><em>Lisp Machine<\/em><\/strong>, the product of years of AI research. This marked the start of the&nbsp;<strong><em>second AI winter<\/em><\/strong>, with DARPA taking a clear position against AI and further funding.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Machine Learning and Neural Networks<\/h2>\n\n\n\n<p>The late 20th century witnessed significant progress in machine learning algorithms, particularly with the development of neural networks and deep learning models. 
Researchers such as Geoffrey Hinton, Yann LeCun, and Yoshua Bengio made pioneering contributions to the field, propelling machine learning to the forefront of AI research and application.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>The return of the Jed(AI)<\/strong><\/h3>\n\n\n\n<p><strong>L<\/strong>uckily enough, in 1993 this period ended with the&nbsp;<strong>MIT Cog project<\/strong>&nbsp;to build a humanoid robot, and with the Dynamic Analysis and Replanning Tool (DART), which repaid the US government for its entire AI funding since 1950. When Deep Blue defeated Kasparov at chess in 1997, it was clear that AI was back on top.<\/p>\n\n\n\n<p>In the last two decades, much has been done in academic research, but AI has only recently been recognized as a paradigm shift. There are of course several causes that help explain why we are investing so much in AI nowadays, but there is one specific event that we think is responsible for the trend of the last five years.<\/p>\n\n\n\n<p>If we look at the following figure, we notice that, regardless of all the developments achieved, AI was not widely recognized until the end of 2012. The figure was created using CBInsights Trends, which plots the trends for specific words or themes (in this case, Artificial Intelligence and Machine Learning).<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter\"><a href=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*ZMlfGaD1R05uY2kNj47seQ.png\"><img decoding=\"async\" src=\"https:\/\/cdn-images-1.medium.com\/max\/1600\/1*ZMlfGaD1R05uY2kNj47seQ.png\" alt=\"\"\/><\/a><figcaption class=\"wp-element-caption\"><strong> Artificial intelligence trend for the period 2012\u20132016. <\/strong><\/figcaption><\/figure>\n\n\n\n<p>More specifically, I drew a line at the date I believe to be the real trigger of this new wave of AI optimism:&nbsp;<strong>Dec. 4th, 2012<\/strong>. 
That Tuesday, a group of researchers presented at the Neural Information Processing Systems (NIPS) conference the details of the convolutional neural network that had won them first place in the&nbsp;<strong>ImageNet Classification<\/strong>&nbsp;competition a few weeks earlier (Krizhevsky et al., 2012). <\/p>\n\n\n\n<p>Their work improved classification accuracy&nbsp;<strong>from 72% to 85%&nbsp;<\/strong>and established neural networks as fundamental to artificial intelligence.<\/p>\n\n\n\n<p>In less than two years, advancements in the field brought ImageNet classification to&nbsp;<strong>an accuracy of 96%, slightly higher than human accuracy (about 95%)<\/strong>.<\/p>\n\n\n\n<p>The picture also shows three major growth trends in AI development (the dotted line), marked by three major events:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The three-year-old&nbsp;<strong>DeepMind<\/strong>&nbsp;being acquired by Google in Jan. 2014;<\/li>\n\n\n\n<li>The open letter of the&nbsp;<strong>Future of Life Institute<\/strong>, signed by more than 8,000 people, and the study on reinforcement learning released by DeepMind (Mnih et al., 2015) in Feb. 2015;<\/li>\n\n\n\n<li>The paper published in Nature in Jan. 2016 by DeepMind scientists on&nbsp;<strong>neural networks<\/strong>&nbsp;(Silver et al., 2016), followed by the impressive victory of AlphaGo over Lee Sedol in March 2016 (and a list of other impressive achievements since; see the article by&nbsp;<a href=\"https:\/\/medium.com\/@ednewtonrex\" target=\"_blank\" rel=\"noreferrer noopener\">Ed Newton-Rex<\/a>).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\">AI in the 21st Century<\/h2>\n\n\n\n<p>The 21st century has seen an unprecedented acceleration in AI research and deployment, driven by advancements in data analytics, cloud computing, and the availability of large-scale datasets. 
Breakthroughs in natural language processing, computer vision, and reinforcement learning have led to transformative applications in areas such as autonomous vehicles, healthcare, finance, and more.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>A new hope<\/strong><\/h3>\n\n\n\n<p><strong>A<\/strong>I is intrinsically highly dependent on funding because it is a&nbsp;<strong>long-term research field<\/strong>&nbsp;that requires an immense amount of effort and resources to be fully explored.<\/p>\n\n\n\n<p>There are thus rising concerns that we might currently be living through the next peak phase (Dhar, 2016), and that the thrill is destined to stop soon.<\/p>\n\n\n\n<p>However, like many others, I believe that this new era is different for three main reasons:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>(Big) data<\/strong>, because we finally have the bulk of data needed to feed the algorithms;<\/li>\n\n\n\n<li><strong>Technological progress<\/strong>, because storage capacity, computational power, algorithmic understanding, better and greater bandwidth, and lower technology costs have allowed us to actually make the models digest the information they need;<\/li>\n\n\n\n<li>The&nbsp;<strong>democratization of resources<\/strong>&nbsp;and the efficient allocation introduced by the Uber and Airbnb business models, reflected in cloud services (e.g., Amazon Web Services) and parallel computing on GPUs.<\/li>\n<\/ul>\n\n\n\n<div class=\"wp-block-essential-blocks-accordion\"><div class=\"eb-parent-wrapper eb-parent-eb-accordion-xw2jw \"><div class=\"eb-accordion-container eb-accordion-xw2jw\" data-accordion-type=\"accordion\" data-tab-icon=\"fas fa-angle-right\" data-expanded-icon=\"fas fa-angle-down\" data-transition-duration=\"500\"><div class=\"eb-accordion-inner\">\n<div class=\"eb-accordion-item-w3msj eb-accordion-wrapper\" data-clickable=\"false\"><div class=\"eb-accordion-title-wrapper\" tabindex=\"0\"><span 
class=\"eb-accordion-icon-wrapper\"><span class=\"fas fa-angle-right eb-accordion-icon\"><\/span><\/span><div class=\"eb-accordion-title-content-wrap\"><h3 class=\"eb-accordion-title\"><strong>References<\/strong><\/h3><\/div><\/div><div class=\"eb-accordion-content-wrapper\"><div class=\"eb-accordion-content\">\n<ul class=\"wp-block-list\">\n<li>Dhar, V. (2016). \u201cThe Future of Artificial Intelligence\u201d. Big Data, 4(1): 5\u20139.<\/li>\n\n\n\n<li>Krizhevsky, A., Sutskever, I., Hinton, G.E. (2012). \u201cImageNet classification with deep convolutional neural networks\u201d. Advances in Neural Information Processing Systems: 1097\u20131105.<\/li>\n\n\n\n<li>Lighthill, J. (1973). \u201cArtificial Intelligence: A General Survey\u201d. In Artificial Intelligence: A Paper Symposium, Science Research Council.<\/li>\n\n\n\n<li>Mnih, V., et al. (2015). \u201cHuman-level control through deep reinforcement learning\u201d. Nature, 518: 529\u2013533.<\/li>\n\n\n\n<li>Silver, D., et al. (2016). \u201cMastering the game of Go with deep neural networks and tree search\u201d. Nature, 529: 484\u2013489.<\/li>\n\n\n\n<li>Turing, A. M. (1950). \u201cComputing Machinery and Intelligence\u201d. Mind, 59: 433\u2013460.<\/li>\n<\/ul>\n<\/div><\/div><\/div>\n<\/div><\/div><\/div><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Ethical and Societal Implications<\/h2>\n\n\n\n<p>As AI technologies become increasingly integrated into everyday life, concerns about ethics, bias, privacy, and the societal impact of AI have come to the forefront. 
Efforts to ensure responsible AI development, transparency, and ethical use of AI are ongoing, reflecting the need for thoughtful consideration of the implications of AI on society.<\/p>\n\n\n\n<p class=\"cls has-palette-color-14-color has-palette-color-1-background-color has-text-color has-background has-link-color wp-elements-5a7e812a9aec9fc47f5fb1e915cd6612\">The evolution of artificial intelligence (AI) has been a remarkable journey, marked by significant advancements, paradigm shifts, and transformative applications across various domains. Over time, AI has undergone rapid development, driven by technological innovation, increased computational power, and the growing availability of large-scale datasets. As AI continues to evolve, it promises to revolutionize industries, transform human experiences, and raise profound questions about the nature of intelligence and the role of machines in our lives. The irony is that we are the ones who are building it.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The history of AI is a testament to human ingenuity, perseverance, and the relentless pursuit of creating intelligent machines. 
As AI continues to evolve, it promises to revolutionize industries, transform human experiences, and raise profound questions about the nature of intelligence and the role of machines in our lives.<\/p>\n","protected":false},"author":7,"featured_media":6754,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5],"tags":[2324,46,118,235,247,528],"class_list":["post-1507","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-features","tag-artificial-general-intelligence-agi","tag-artificial-intelligence","tag-big-data","tag-data-science","tag-deep-learning","tag-machine-learning"],"blocksy_meta":{"styles_descriptor":{"styles":{"desktop":"","tablet":"","mobile":""},"google_fonts":[],"version":6}},"_links":{"self":[{"href":"https:\/\/gtechbooster.com\/api-json\/wp\/v2\/posts\/1507","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/gtechbooster.com\/api-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/gtechbooster.com\/api-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/gtechbooster.com\/api-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/gtechbooster.com\/api-json\/wp\/v2\/comments?post=1507"}],"version-history":[{"count":0,"href":"https:\/\/gtechbooster.com\/api-json\/wp\/v2\/posts\/1507\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/gtechbooster.com\/api-json\/wp\/v2\/media\/6754"}],"wp:attachment":[{"href":"https:\/\/gtechbooster.com\/api-json\/wp\/v2\/media?parent=1507"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/gtechbooster.com\/api-json\/wp\/v2\/categories?post=1507"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/gtechbooster.com\/api-json\/wp\/v2\/tags?post=1507"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}