I am an AI researcher.  But I take a slightly unusual approach to the subject, and I also have a peculiar backstory...

Then, everything seemed to change.

At the heart of the change was a fight, started by an AI faction--they called themselves the 'Neats'--who believed that AI should be done by mathematicians.  They called other AI researchers 'Scruffs' -- and what they meant by 'Scruffs' was people who built AI programs that were exploratory, or inspired by human psychology.

This was all happening in a period that was later called the Second AI Winter, and by the time Spring arrived (roughly, the early 90s) the Neats had won.  In the course of a few years (roughly 1987 to 1992) they took over all the academic positions of power, and before long it became darned hard for anyone who was labeled a 'Scruff' to get a job, get funding, get students, or get published.  Of course, there were exceptions, but this was the big picture.

So, where do I come into this?

I graduated at the beginning of the fight.  I didn't know it at the time, but I would have been classified as a "super-Scruff" -- a Scruff who also believed that 'complex systems' were important in AI.  And if the Neats hated one thing more than psychology and hacking, it was the whole idea of a complex system--because one implication of the complex-system concept is that intelligence might be intrinsically emergent, and if it really is emergent you can't use mathematics to build AI systems.

So, as soon as I graduated, my career was doomed.  That entire branch of AI that involved combining complex systems with cognitive psychology (my specialty) was wiped out by the ascendance of the Neats.

But (unlike many scruffs), I never gave up on my AI research.

In 2006 I gave a paper at the first workshop on Artificial General Intelligence, in Bethesda, Maryland.  That paper ("Complex Systems, Artificial Intelligence and Theoretical Psychology") explained that AI had a serious problem at its heart, because it was ignoring the fact that if you want to get an intelligent system working, you inevitably and unavoidably have to accept that it will be a complex system.  This, I pointed out, implied that we had to take drastic action, because all modern AI is built on an assumption that you can always ignore complex systems effects.  Later in the paper I suggested a way (a methodology) to get around this problem.

The funny thing is ... right now (2019) the AI/ML community is slowly, slowly waking up to the ideas I have been pushing all along.  There have been rumblings in the Machine Learning community that their methodology is looking more and more like "alchemy", and although it might not be obvious, this tendency is something I predicted in that 2006 paper.  (The original title of the paper was "Cognitive Alchemy", but I was pressured to change it because it was thought to be unscientific.)

But, sadly, at the rate things are progressing it will take the AI community another ten or twenty years to finally understand the relationship between complex systems, machine intelligence, and cognitive psychology.  Another ten or twenty years of wasted effort.

So, if you want to know what AI will be like in a couple of decades, or if you prefer not to have to wait that long, you know who to ask.