At the first AGI conference (Memphis, '08), I wondered aloud about the need to filter out unimportant data in order to achieve a functioning "general" intelligence... ResNet delivers ...
Wow, attending the very first AGI conference in ’08 must have been incredible!
I love the point about filtering out the non-important data to reach something truly general—that resonates with what we’re seeing in the classic ResNet work and beyond.
(Fun fact: I once thought I’d made it to the inaugural INET conference in ’95—turns out the real first one was in ’89. History always has an earlier starting line than we think!)
Thanks for sharing this memory, David—insightful context like this is exactly why I’m running the “Wolf Reads AI” series.
It was something... run by Goertzel, and meeting folks like Hall, Adams, and others was really something!
At that time, I was looking at AGI from the media side and have been studying/promoting it ever since... I had no idea the definition I support (Machine becomes Itself) would be watered down like it has...
When you say watered down, where do you think it was going when you went to that first conference? Or, what was the general vision that was presented to the media back then?
The conception of the machine becoming conscious, and the need to develop neural networks imitating the human brain, was really something.
The OpenAI definition (always followed by "it's here"), tied to the $$ produced by scaling up, is not right. Plus we have all the "experts" making too big a claim about whichever new drop.
I love the way you frame OpenAI. It scales into everything I’ve heard about the company.
With regard to your first point, the reason I started "Deep Learning with the Wolf" is that I was absolutely fascinated by the concept of neural networks imitating the human brain.
In a bit of irony, I was interviewing with OpenAI, and their "suggestions for interviewees" page includes recommended reading. The Deep Learning textbook by Ian Goodfellow, Yoshua Bengio, and Aaron Courville topped the list. It's an expensive textbook, but I located an affordable copy in decent shape at a used bookseller.
My timing was bad with OpenAI as they fired their CEO that week. It was a huge spate of drama and I never heard back from my recruiter. I didn’t take it personally. I was deep into reading the Deep Learning textbook. The math calculations looked like Big Bang theory stuff to me, but the rest of it made sense.
I found myself wanting to write about it, because I understand concepts better when I write about them. So, I created a list of key terms that intrigued me from the book. Some of the terms just sounded so cool. "Gradient descent." "Backpropagation." "Stochastic parrot." That was the start of my newsletter. I wasn't sure if anyone was going to read it, but I wanted to write about it. But the newsletter kept gaining readers on LinkedIn, and eventually I brought it over here. Now, I offer unique content to Substack, like the 30 Days of AI Papers.
But it all comes back to being fascinated by deep learning: teaching computers to process data in a way inspired by the human brain.
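For readers new to those terms: here's a minimal sketch (my own illustration, not something from this thread or from the textbook) of what "gradient descent" amounts to for a single weight. One artificial "neuron" learns the rule y = 2x; the gradient is computed by hand, which is the one-weight version of what backpropagation automates across the layers of a deep network.

```python
# A toy example: one weight w learning y = 2x from three data points.
# "Forward pass" makes a prediction; the gradient of the squared error
# tells us which way to nudge w; "gradient descent" is that nudge.

def train(steps=200, lr=0.1):
    w = 0.0  # initial guess for the weight
    data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # samples of y = 2x
    for _ in range(steps):
        for x, y in data:
            pred = w * x                  # forward pass: prediction
            grad = 2 * (pred - y) * x     # d/dw of the squared error (pred - y)**2
            w -= lr * grad                # gradient descent step
    return w

print(train())  # w converges toward 2.0, the true slope
```

The same nudge-downhill idea, applied to millions of weights at once with the gradients delivered by backpropagation, is the engine under essentially all modern deep learning.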
I try to be fair. In the case of OpenAI, Musk was guilty, too. He was still there when they did the "This could be extremely dangerous, we don't know if we should allow the public to see it!" routine when they were about to introduce their second drop back when.
Currently, Altman is dangerous because he is a con man. NO ONE (all these experts) went after the fact that you could see the swarm coming (eliminating gates) that has him claiming the need for $500B at the same time as producing a tool that will force ALL retail outlets to pay the piper to survive...