{"id":300784,"date":"2025-11-19T08:05:15","date_gmt":"2025-11-19T08:05:15","guid":{"rendered":"https:\/\/www.newsbeep.com\/us\/300784\/"},"modified":"2025-11-19T08:05:15","modified_gmt":"2025-11-19T08:05:15","slug":"the-false-glorification-of-yann-lecun","status":"publish","type":"post","link":"https:\/\/www.newsbeep.com\/us\/300784\/","title":{"rendered":"The False Glorification of Yann LeCun"},"content":{"rendered":"<p>With the support of Meta, and the unwitting assistance of the press, Yann LeCun has for the last decade run one of the more successful PR campaigns in recent scientific history, persistently allowing himself to be painted as the inventor of ideas, techniques, and arguments to which he is not the inventor of.  The culmination of that PR campaign came Friday, with <a href=\"https:\/\/www.wsj.com\/tech\/ai\/yann-lecun-ai-meta-0058b13c\" rel=\"nofollow noopener\" target=\"_blank\">a puff piece in <\/a><a href=\"https:\/\/www.wsj.com\/tech\/ai\/yann-lecun-ai-meta-0058b13c\" rel=\"nofollow noopener\" target=\"_blank\">The Wall Street Journal<\/a>, tied to a new startup LeCun is apparently launching, with a grossly misleading headline that styled LeCun as a lone genius. 
<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!Fbie!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f23da67-0ee2-4b5e-8932-ce527f014070_1237x1604.png\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img can-restack\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/11\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/3f23da67-0ee2-4b5e-8932-ce527f014070_1237.jpeg\" width=\"1237\" height=\"1604\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/3f23da67-0ee2-4b5e-8932-ce527f014070_1237x1604.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1604,&quot;width&quot;:1237,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1828901,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/179057564?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3f23da67-0ee2-4b5e-8932-ce527f014070_1237x1604.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   fetchpriority=\"high\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>The headline is wrong in almost every way possible: LeCun has not been right about everything, or even consistent in his own views over time, particularly with respect to LLMs (see below). In many of the things that he is now challenging, he is far from alone, though the headlines sometimes paint him like that, and far from being the first in raising the challenges that he has raised.  <\/p>\n<p>In fact, most of what LeCun says has a long history that precedes him. 
And at every turn he conveniently ignores that history, in order to exaggerate the originality of his ideas. By and large he has been effective at this. The myth of him as lone genius has worked for him, at least in the popular imagination. <\/p>\n<p>But it is not true \u2014 or even remotely so. <\/p>\n<p>For the most part, LeCun is nowadays known to the general public for five ideas: convolutional neural networks, his critique of large language models, his critique of the scaling hypothesis, his advocacy of commonsense and physical reasoning, and his advocacy of world models. The thing is, (a) he originated exactly none of these ideas, and (b) he rarely if ever credits any of the people who actually did. This reflects a consistent pattern known as the <a href=\"https:\/\/ori.hhs.gov\/plagiarism-ideas\" rel=\"nofollow noopener\" target=\"_blank\">plagiarism of ideas<\/a>.<\/p>\n<p>Per the US Office of Research Integrity, the plagiarism of ideas is defined as \u201cAppropriating someone else\u2019s idea (e.g., an explanation, a theory, a conclusion, a hypothesis, a metaphor) in whole or in part, or with superficial modifications without giving credit to its originator\u201d. He has done this over and over and over.<\/p>\n<p><a href=\"https:\/\/en.wikipedia.org\/wiki\/Convolutional_neural_network\" rel=\"nofollow noopener\" target=\"_blank\">Convolutional neural networks<\/a> (CNNs) are, without a doubt, a foundational contribution to AI; they have found applications in image recognition, speech recognition, natural language processing, recommendation systems, and many other areas. Until large language models became dominant, they were one of the leading techniques used in machine learning. 
(The LSTM, published by <a href=\"https:\/\/www.semanticscholar.org\/paper\/Long-Short-Term-Memory-Hochreiter-Schmidhuber\/2e9d221c206e9503ceb452302d68d10e293f2a10\" rel=\"nofollow noopener\" target=\"_blank\">Hochreiter and Schmidhuber, 1997<\/a>, was also in very widespread use, <a href=\"https:\/\/arxiv.org\/pdf\/1704.04760\" rel=\"nofollow noopener\" target=\"_blank\">exceeding CNNs in commercial deployment according to one study<\/a>.) And there is no doubt that LeCun played a role in developing convolutional neural networks. But he neither invented them nor was he the first to apply the back-propagation algorithm to learning their weights (though many people mistakenly believe this).<\/p>\n<p>The foundational work was done by Kunihiko Fukushima in 1979-1980; Wei Zhang et al. (1988) beat LeCun to applying back-propagation to convolutional neural networks in little-known work <a href=\"https:\/\/people.idsia.ch\/~juergen\/Zhang-1988-shift-invariant-NN-JSAP-WithEnglishTranslation.pdf\" rel=\"nofollow noopener\" target=\"_blank\">published in 1988<\/a> in Japanese (with an English abstract). LeCun had the good fortune of publishing in more prominent places in English the next year, and he devised important tricks for improving CNNs\u2019 performance, but his work wasn\u2019t first.  
LeCun rarely mentions his predecessors.<\/p>\n<p>Schmidhuber has documented this ongoing pattern numerous times, presenting receipts <a href=\"https:\/\/x.com\/SchmidhuberAI\/status\/1952007922721919219?s=20\" rel=\"nofollow\">in a short history of convolutional neural networks<\/a>, in documentation <a href=\"https:\/\/x.com\/SchmidhuberAI\/status\/1544939700099710976?s=20\" rel=\"nofollow\">of how a key paper by LeCun neglects critical past work<\/a>, in an explication of <a href=\"https:\/\/x.com\/SchmidhuberAI\/status\/1594964463727570945?s=20\" rel=\"nofollow\">how a list by LeCun of key recent inventions again neglected critical past work<\/a>, and <a href=\"https:\/\/x.com\/SchmidhuberAI\/status\/1735313711240253567?s=20\" rel=\"nofollow\">in a detailed discussion of LeCun and his collaborators\u2019 consistent omissions of previous work<\/a>.<\/p>\n<p>Yet LeCun often slights Zhang\u2019s pioneering work; strikingly, <a href=\"https:\/\/github.com\/CodeRayZhang\/Deep-Learning-Papers-Reading-Roadmap\/blob\/master\/1.1-Survey\/LeCun%2C%20Yann%2C%20Yoshua%20Bengio%2C%20and%20Geoffrey%20Hinton.%20Deep%20learning.%20Nature%20521.7553%20(2015).pdf\" rel=\"nofollow noopener\" target=\"_blank\">no mention of it was made at all in LeCun\u2019s most-cited survey<\/a>.<\/p>\n<p>The Wall Street Journal article is in part about LeCun\u2019s critique of LLMs, and his critique has been noted in the press numerous times. In no way was LeCun there first, either.  
<\/p>\n<p>I was likely the first, with a series of challenges to GPT-2 and GPT-3 on Twitter in the fall of <a href=\"https:\/\/x.com\/garymarcus\/status\/1188803198980521986?s=61\" rel=\"nofollow\">2019<\/a> and in a series of articles in <a href=\"https:\/\/thegradient.pub\/gpt2-and-the-nature-of-intelligence\/\" rel=\"nofollow noopener\" target=\"_blank\">2019<\/a> and <a href=\"https:\/\/www.technologyreview.com\/2020\/08\/22\/1007539\/gpt3-openai-language-generator-artificial-intelligence-ai-opinion\/\" rel=\"nofollow noopener\" target=\"_blank\">2020<\/a>.<\/p>\n<p>What is striking is that LeCun was, at the time, publicly hostile to those critiques, accusing me of a \u201c<a href=\"https:\/\/twitter.com\/ylecun\/status\/1188902027495006208?s=20&amp;t=rZorYMVHU32iCXJmFfICOQ\" rel=\"nofollow noopener\" target=\"_blank\">rearguard action<\/a>.\u201d I spent much of the next several years arguing against LLMs; LeCun frequently tussled with me, and never once publicly supported my critique. (It\u2019s only when ChatGPT eclipsed Meta that LeCun began to be sharply, publicly critical of LLMs) LeCun has in fact probably never cited my own critiques, and always presented his own critiques as if they were his own original ideas.<\/p>\n<p>Likewise, LeCun rarely if ever cites Emily Bender et al\u2019s 2021 prominent <a href=\"https:\/\/dl.acm.org\/doi\/10.1145\/3442188.3445922\" rel=\"nofollow noopener\" target=\"_blank\">stochastic parrots<\/a> paper, an influential and important critique of LLMs that also preceded the era in which LeCun began to loudly critique LLMs.  
<\/p>\n<p>Over a year later, November 2022, LeCun was still loudly promoting (his company\u2019s) LLMs as \u201camazing work\u201d:<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!L99K!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad8adc8d-63de-4532-80f0-302945344bf4_1027x686.png\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img can-restack\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/11\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/ad8adc8d-63de-4532-80f0-302945344bf4_1027.jpeg\" width=\"1027\" height=\"686\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/ad8adc8d-63de-4532-80f0-302945344bf4_1027x686.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:686,&quot;width&quot;:1027,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:150541,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/179057564?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fad8adc8d-63de-4532-80f0-302945344bf4_1027x686.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>He only fully changed his mind a few weeks later, after Galactica tanked and ChatGPT ate his lunch. Consistent \u201cfor 40 years\u201d he has not been.<\/p>\n<p>Most ludicrous of all is the WSJ portrayal of LeCun as somehow isolated in doubting that LLMs can reach AGI.  
A recent AAAI survey showed that this is in fact <a href=\"https:\/\/aaai.org\/about-aaai\/presidential-panel-on-the-future-of-ai-research\/\" rel=\"nofollow noopener\" target=\"_blank\">the majority opinion among a broad sampling of researchers and academic scientists, by a wide margin<\/a>. <\/p>\n<p>LeCun is also making waves for his criticism of scaling. The Wall Street Journal reports:<\/p>\n<p>\u201cWe are not going to get to human-level AI just by scaling LLMs,\u201d [LeCun] said on Alex Kantrowitz\u2019s <a href=\"https:\/\/www.youtube.com\/watch?v=4__gg83s_Do&amp;mod=ANLink\" rel=\"nofollow noopener\" target=\"_blank\">Big Technology podcast<\/a> this spring. \u201cThere\u2019s no way, absolutely no way, and whatever you can hear from some of my more adventurous colleagues, it\u2019s not going to happen within the next two years. There\u2019s absolutely no way in hell to\u2013pardon my French.\u201d<\/p>\n<p>But, again, LeCun wasn\u2019t there first. Instead, I was probably the first person to doubt this publicly, back in <a href=\"https:\/\/nautil.us\/deep-learning-is-hitting-a-wall-238440\/\" rel=\"nofollow noopener\" target=\"_blank\">2022<\/a>:<\/p>\n<p>There are serious holes in the scaling argument. To begin with, the measures that have scaled have not captured what we desperately need to improve: genuine comprehension\u2026 <\/p>\n<p>What\u2019s more, the so-called scaling laws aren\u2019t universal laws like gravity but rather mere observations that might not hold forever, much like Moore\u2019s law, a trend in computer chip production that held for decades but arguably <a href=\"https:\/\/www.nytimes.com\/2015\/09\/27\/technology\/smaller-faster-cheaper-over-the-future-of-computer-chips.html\" rel=\"nofollow noopener\" target=\"_blank\">began to slow<\/a> a decade ago.<\/p>\n<p>At the time LeCun was hardly supportive; instead he trolled my critique on Facebook. 
In no way has LeCun ever publicly acknowledged that I was on target with my conjecture that scaling LLMs would not lead to AGI. To the contrary, LeCun has repeatedly pretended that the anti-scaling idea originated with him, and he continues to falsely paint himself as the first and only person to have seen this. <\/p>\n<p>Another argument LeCun has commonly made in recent years is that LLMs lack common sense and are poor at physical reasoning. For years, though, he hardly seemed to emphasize the problem at all; <a href=\"https:\/\/hal.science\/hal-04206682\/file\/Lecun2015.pdf\" rel=\"nofollow noopener\" target=\"_blank\">in LeCun\u2019s famous 2015 Nature paper on deep learning, common sense is only mentioned once, in passing, with zero citations<\/a>. <\/p>\n<p>In reality, others whom he rarely or never cites had been concerned about the problem for years, going back to John McCarthy (one of AI\u2019s actual godfathers) in the late 1950s, Pat Hayes in the 1970s and <a href=\"https:\/\/www.cs.unibo.it\/~nuzzoles\/courses\/intelligenza-artificiale\/exam\/3-second-naive-physics-manifesto.pdf\" rel=\"nofollow noopener\" target=\"_blank\">1980s<\/a>, and, over the last two decades, my long-term collaborator Ernest Davis, who has literally been in the same department as LeCun at NYU for the last quarter century. Yet LeCun mentions Davis\u2019s work on commonsense reasoning scandalously rarely. <\/p>\n<p>Also mentioned in the WSJ article is that LeCun is excited about <a href=\"https:\/\/garymarcus.substack.com\/publish\/post\/164369506\" rel=\"nofollow noopener\" target=\"_blank\">world models<\/a>, \u201ca technology that LeCun thinks is more likely to advance the state of AI than Meta\u2019s current language models\u201d. <\/p>\n<p>As it happens, the idea of world models is not new. 
<a href=\"https:\/\/open.substack.com\/pub\/garymarcus\/p\/generative-ais-crippling-and-widespread?r=8tdk6&amp;utm_campaign=post&amp;utm_medium=web&amp;showWelcomeOnShare=false\" rel=\"nofollow noopener\" target=\"_blank\">As noted here a few months ago<\/a>, it goes back to the 1950s, and Herb Simon\u2019s <a href=\"https:\/\/en.wikipedia.org\/wiki\/General_Problem_Solver\" rel=\"nofollow noopener\" target=\"_blank\">General Problem Solver<\/a>. Schmidhuber has been advocating adding world models to neural networks, as far back as <a href=\"https:\/\/people.idsia.ch\/~juergen\/world-models-planning-curiosity-fki-1990.html\" rel=\"nofollow noopener\" target=\"_blank\">1990<\/a>, and in important technical work <a href=\"https:\/\/arxiv.org\/abs\/1511.09249\" rel=\"nofollow noopener\" target=\"_blank\">in 2015<\/a>, as  well as <a href=\"https:\/\/arxiv.org\/abs\/1803.10122\" rel=\"nofollow noopener\" target=\"_blank\">in a more recent article with David Ha<\/a>, now CEO of the very well-funded Sakana Labs. True to form, LeCun rarely refers to this work in his public presentations.  <\/p>\n<p>Likewise, Ernest Davis and I argued strenuously that the field should pay more attention to world (cognitive) models in our 2019 book Rebooting AI, which LeCun was dismissive of. Challenge with LLMs representing world models have been central to my own critiques of LLMs since the fall of 2019, and were one of four key foci in my 2020 article <a href=\"https:\/\/arxiv.org\/abs\/2002.06177\" rel=\"nofollow noopener\" target=\"_blank\">The Next Decade in AI<\/a>, perhaps the first lengthy discussion of the need for integrating world (cognitive) models specifically with LLMs. I don\u2019t believe LeCun has ever acknowledged any of this, except in 2019 <a href=\"https:\/\/twitter.com\/ylecun\/status\/1188902027495006208?s=20&amp;t=rZorYMVHU32iCXJmFfICOQ\" rel=\"nofollow noopener\" target=\"_blank\">when he originally dismissed my claims<\/a>.   
<\/p>\n<p>Similarly, Fei-Fei Li is building a world-model focused AI startup, <a href=\"https:\/\/www.worldlabs.ai\/\" rel=\"nofollow noopener\" target=\"_blank\">World Labs<\/a>; based on past experience, I will be surprised if LeCun ever gives her endeavors much mention.<\/p>\n<p>Someone on X summed it up well Saturday:<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!QFV_!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e819c3e-b08c-4257-b8d9-5493e54ffb7f_1206x1276.jpeg\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 is-viewable-img can-restack\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/11\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/8e819c3e-b08c-4257-b8d9-5493e54ffb7f_1206.jpeg\" width=\"1206\" height=\"1276\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/8e819c3e-b08c-4257-b8d9-5493e54ffb7f_1206x1276.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1276,&quot;width&quot;:1206,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:228180,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/179057564?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8e819c3e-b08c-4257-b8d9-5493e54ffb7f_1206x1276.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>The eminent AI researcher Hector Zenil made similar points on LinkedIn on Sunday morning:<\/p>\n<p><a target=\"_blank\" 
href=\"https:\/\/substackcdn.com\/image\/fetch\/$s_!W6Jp!,f_auto,q_auto:good,fl_progressive:steep\/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe37e5392-3931-4042-aa2d-9991544fe980_1135x214.png\" data-component-name=\"Image2ToDOM\" rel=\"nofollow noopener\" class=\"image-link image2 can-restack\"><img decoding=\"async\" src=\"https:\/\/www.newsbeep.com\/us\/wp-content\/uploads\/2025\/11\/https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/e37e5392-3931-4042-aa2d-9991544fe980_1135.jpeg\" width=\"1135\" height=\"214\" data-attrs=\"{&quot;src&quot;:&quot;https:\/\/substack-post-media.s3.amazonaws.com\/public\/images\/e37e5392-3931-4042-aa2d-9991544fe980_1135x214.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:214,&quot;width&quot;:1135,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:82946,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image\/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https:\/\/garymarcus.substack.com\/i\/179057564?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe37e5392-3931-4042-aa2d-9991544fe980_1135x214.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}\" alt=\"\"   loading=\"lazy\" class=\"sizing-normal\"\/><\/a><\/p>\n<p>Yann LeCun has, without a doubt made genuine contributions to AI, and I am pleased to see him speak out again the limits on LLMs. But he has also systematically dismissed and ignored the work of others for years, including Schmidhuber, Fukushima, Zhang, Bender, Li and myself, in order to exaggerate his own contributions. With the help of Meta\u2019s media lobby he has succeeded in fooling most of the press and some fraction of the public. LeCun has lapped it up, and done absolutely nothing to set the record straight. 
<\/p>\n<p>But the myths about the originality of his thought simply aren\u2019t true. <\/p>\n<p>Whether he can produce genuinely original ideas in his new startup remains to be seen.<\/p>\n<p>Gary Marcus began critiquing traditional neural networks and calling for hybrid neurosymbolic architectures in his first publication in 1992, advocated vociferously for neurosymbolic cognitive models in his 2001 book The Algebraic Mind, in which he anticipated current troubles with hallucinations and unreliable reasoning. He first warned how these limits would apply to LLMs in 2019, emphasizing their lack of stable world models.<\/p>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n","protected":false},"excerpt":{"rendered":"With the support of Meta, and the unwitting assistance of the press, Yann LeCun has for the last&hellip;\n","protected":false},"author":2,"featured_media":300785,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[45],"tags":[182,181,507,74],"class_list":{"0":"post-300784","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-artificial-intelligence","8":"tag-ai","9":"tag-artificial-intelligence","10":"tag-artificialintelligence","11":"tag-technology"},"_links":{"self":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/300784","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/comments?post=300784"}],"version-history":[{"count":0,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/posts\/300784\/revisions"}],"wp:featuredmedia":[{"embe
ddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media\/300785"}],"wp:attachment":[{"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/media?parent=300784"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/categories?post=300784"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.newsbeep.com\/us\/wp-json\/wp\/v2\/tags?post=300784"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}