This book is middling at best. From a literary perspective it's terrible; it reads like GPT-3.5 wrote it. As an introduction to a new idea, and to understanding how AI will affect society, it's __fine__. The book is full of contradictions: Suleyman regularly points out that we've __never__ successfully constrained a disruptive technological innovation, and then says we NEED to here. I mean, absolutely absurd stuff. It ostensibly ends on an optimistic note, but is actually much more nihilistic.
Hmm, didn't we do this for cloning? I remember hearing about this on Lex Fridman when he interviewed Max Tegmark.
If I recall correctly, the entire world is in agreement that cloning is illegal, and some people in China (could be just one) even went to prison for it.
If you could use cloning to make lots of money more labs would be doing it.
You probably could make lots of money from human cloning, if it weren't illegal?
If I could clone myself, I'd make a few bug fixes.
I think the profitability of human cloning is significantly lower and slower than AI.
Cloning doesn't scale like code. Code is information that wants to be free and can be reproduced with a few mouse clicks, but cloning is a lot of hard work. All the instructions for cloning could be leaked on the internet and you'd still have to build a lab, hire people who know what they're doing, etc. And who can profit from making a bunch of damn kids in this era?
“Everybody wanna be a bodybuilder, but don't nobody want to lift no heavy ass weight.” -Ronnie Coleman
>we've __never__ successfully constrained a disrupting technological innovation and then says we NEED to here
These aren't mutually exclusive.
To put it mildly.
Eh, we've done a partially successful job constraining nuclear weapons beyond the US and the USSR/Russia. And, so far, we've constrained their use in war.
Nuclear energy as a whole is restricted to very few countries.
> It reads like GPT-3.5 wrote it
These days it would be surprising if an author didn't generate at least some of the text with AI, or direct an AI to improve the prose.
The begging for regulatory capture in the AI business is so egregious that I think the government should nationalize all the AI companies 'to keep us safe' but really to prevent these shysters from making money.
If it really costs tens of billions of dollars to make then it’s trivial to make a decision as a society about whether or not to build it.
Nothing that costs ten billion dollars gets built without the explicit or implicit consent of the public.
Internationally? If it’s a big enough deal the deterrent is strategic counter value.
We’re doing this deliberately. Maybe that’s good, maybe it’s bad, but it’s on purpose and it’s dishonest to say otherwise.
The author is the head of Microsoft AI. Gates might not be entirely unbiased here :)
edit: as a semi-related question for folks here. How often do you 'vet' authors of non-fiction books prior to reading the book?
> How often do you 'vet' authors of non-fiction books prior to reading the book?
I misread the question as 'How' rather than 'How often', but I'll repeat Jerry Weinberg's heuristic. He'd wait until three people he trusted recommended a book before reading it, as a way to filter for quality. He used it as a way to manage his limited time ("24 hours, maybe 60 good years" - Jimmy Buffett), but it also works to weed out books not worth mentioning.
In terms of 'how often', pretty often.
I read all the negative/neutral GoodReads comments on the book (and on the author's other books if I'm not familiar with the author, and maybe Wikipedia if I want to dig deeper).
I learn about 99% of books from recommendations (HN, blogs, other books), and the pattern I see is that the source/recommender is usually at a similar "popsci" level.
I sometimes get it wrong. In most cases I just waste a few hours. The worst mistake was taking Why We Sleep to heart before I read the rebuttal. I still think it's fine, but more on a Gladwell level.
In Suleyman's case, I recognize the name from the Inflection shenanigans, so I already have a bias against the book to start with.
I basically don't read much pop sci anymore because it's almost always bad; the science in the good ones is better read unfiltered in the papers, and the journals do a good job of making authors declare conflicts of interest.
It's worst when these books are written by people without scientific training, because they're more likely to make logical errors or use motivated reasoning to push a narrative.
I always do a little bit of research on the author online beforehand. If I'm going to read a non-fiction book I need to know if the author is credible.
I do believe Bill Gates might be a little bit biased here. I read the book some months ago, and while I can't say it's a bad book, I wouldn't call it a favorite either.
Depends on the subject matter.
If it's something I have no grounding in, then understanding the author's potential biases is useful.
If it's something I'm relatively familiar with, or close enough that I think I'll be able to understand the application of potential biases in real time, then I don't usually bother.
This issue is sometimes somewhat alleviated by reading multiple sources for the same/similar information.
YMMV
Funny you should say that because I just read his Wikipedia page and a couple of articles about him. He and Gates are successful salesmen and managers who dabbled in coding when they were young. I don't expect any insight from them about the effect of technologies on society or anything like that. The idea that they are intellectuals or scholars is laughable.
Really, you judge their intellect from reading Wikipedia and a few articles?
What if I tell you that you are shallow at best and incapable of critical thinking, from the comments you made on HN? Does that sound ridiculous on my end?
> The author is the head of Microsoft AI.
I'm disappointed but not surprised.
> The historian Yuval Noah Harari has argued that humans should figure out how to work together and establish trust before developing advanced AI. In theory, I agree. If I had a magic button that could slow this whole thing down for 30 or 40 years while humanity figures out trust and common goals, I might press it. But that button doesn’t exist. These technologies will be created regardless of what any individual or company does.
Is that true, though? Training runs for frontier models don’t happen without enormous resources and support. You don’t run one in your garage. It doesn’t happen unless people make it happen.
Is this really a harder coordination problem than, say, stopping climate change, which Gates does believe is worth trying?
Climate change is the byproduct of the desired outcome, energy. Advanced AI, if you buy Yuval's argument, is the threat in and of itself.
So climate change is a problem that can be 'solved' while the main goal is pursued. This is ideologically consistent with Gates's investment in TerraPower. Whereas AI isn't, because the desired outcome is the threat, not a by-product.
So your question is fundamentally a bit flawed.
As for Gates's point, is it true? Almost certainly yes. The game theory (sketched below) is: pursue and lie that you aren't, or pursue openly. You can't ever not pursue, because you do not and cannot have perfect information.
Imagine how much visibility China would demand from the US to trust that it was doing nothing, far more than the US could give, and vice versa.
Do you think the US is going to give its adversaries tracking and production information for its most advanced chips? It never would, and if it did, why would other powers trust it when there's every reason to lie?
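To make that game-theoretic claim concrete, here's a minimal sketch in Python. The payoff numbers are entirely hypothetical, chosen only to encode the assumption above: being the sole abstainer is the worst outcome, and gaining a sole lead is the best.

    # Hypothetical 2x2 game: two states decide whether to pursue advanced AI.
    # Payoffs are (ours, theirs); the numbers are illustrative only.
    PAYOFFS = {
        ("pursue", "pursue"):   (1, 1),   # arms race: risky, but nobody falls behind
        ("pursue", "abstain"):  (3, 0),   # we gain a decisive edge
        ("abstain", "pursue"):  (0, 3),   # we fall decisively behind
        ("abstain", "abstain"): (2, 2),   # best joint outcome, but unverifiable
    }

    def best_response(their_choice: str) -> str:
        """Our payoff-maximizing move, given the other side's move."""
        return max(("pursue", "abstain"),
                   key=lambda ours: PAYOFFS[(ours, their_choice)][0])

    # "pursue" maximizes our payoff whatever the other side does, so without
    # verifiable commitments both sides end up pursuing.
    for theirs in ("pursue", "abstain"):
        print(f"if they {theirs}: we should {best_response(theirs)}")

Under these made-up payoffs, "pursue" is a dominant strategy for both sides, which is exactly the "you can't ever not pursue without perfect information" point.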
> Climate change is the byproduct of the desired outcome, energy. Advanced AI, if you buy Yuval's argument, is the threat in and of itself.
That's just a matter of how you slice your concepts. You could say burning oil is a threat in and of itself, for example. Or conversely, "the threat of bad AI" is a byproduct of "useful AI".
> So climate change is a problem that can be 'solved' while the main goal is pursued.
I don't think many people trying to solve climate change are trying to end industrial society. They are trying to find an energy source that doesn't produce CO2 pollution.
AI research is global and of strategic value, with both the US and China competing. I don't see one stopping research while the other cracks ahead. Similar problems exist with curbing CO2, which hasn't gone very well to date.
Genuinely asking: are we still trying to "stop" climate change, or are we already in the "easing the incoming pain" phase?
Powerful, and wealthy in particular, people are more interested in the latter than the former. When vast wealth is built on maintaining the status quo, there is very little incentive to implement changes that threaten the status quo "organically."
I get hope when I read this essay from Harpers [0]. But I actually think it will be more like Paolo Bacigalupi's "The Water Knife" [1].
[0]: https://harpers.org/archive/2021/06/prayer-for-a-just-war-fi...
[1]: https://en.wikipedia.org/wiki/The_Water_Knife
We have a PhD-student ecologist in the family and they've lost all hope. It's quite depressing. It doesn't look like the next round of world governments has it high on the agenda either.
Isn't it sad that our governments and companies quickly solved AI's energy problems by bootstrapping nuclear again, but in 30 years couldn't make a single unified decision on anything else?
It'll be ironic if AI solves climate change by just existing, i.e. by giving the world a compelling reason to stop hiding behind austerity and anti-growth bullshit rhetoric, and to double down on energy production instead, thus forcing us to tackle the "how do we do it without cooking ourselves" problem up front, since it cannot be deferred.
It won’t start moving until the boomer generation are out, I think. Let’s just hope we can do better when it’s our turn I guess.
Do you mean stop as in reverse the trend by limiting bad behavior, or stop as in climate engineering? There is always the option of using science to modify the climate on purpose.
Let's face it. Our world is currently filled with rogue states waging pointless wars, spying on their own citizens, launching cyberattacks, seeding disinformation outside their borders, etc. If they want to make it happen they will. It is a damn hard coordination problem.
Sam Altman convincing the ultra-rich that AI is the next best place to put their excess money, after they've run out of other places to put it, is nowhere near the spirit of what Harari meant by people "working together." Harari's point is that in a dysfunctional society, AI will only magnify the dysfunction.
Fighting climate change is a means towards the end of having a livable environment, developing AI is a means towards the end of having a better society. But, whereas fixing the environment would be its own automatic benefit, having AGI would not automatically improve the world. Something as seemingly innocuous and positive as social networking made a lot of things worse.
This rings true. The internet exposed dysfunctional wealth inequality (look at all the billionaires it created), social networks exposed dysfunctional human communication and manipulation (i.e., Jan 6 was a love fest), and now AI will just take both of those to the next level: extreme wealth, extreme poverty, and mass manipulation of people.
If anybody thinks AI will cure cancer or something grandiose like that, they are propping up their stock portfolio.
AI will be the harbinger of the last wave of human growth before we all end up killing each other over the price of eggs, or whatever else the AI regurgitation machine decides.
I just finished listening to it on Audible. It is certainly thought-provoking, but full of contradictions, as others have mentioned. Namely, the claim that this technology cannot be contained, and yet must be contained, is pretty doom and gloom. The prognostications about artificial intelligence are hardly as scary as the ones made around genetic sequencing: that you can buy a device for $30k that will print pathogens and viruses for you out of your garage. That's some scary stuff.
They said the same thing about The Anarchist Cookbook 30 years ago.
You've been able to buy plasmids and make whatever bacteria you want for a few decades now. AI may help, but it certainly doesn't cost $30k to cause mischief. Pretty sure I learned that in Bio 102.
I just want to echo this in slightly different wording: AI will provide step-by-step guides for making viruses that just about any idiot can follow, very cheaply, and within a year's time frame.
I really really hope I'm missing something big here.
Having a step-by-step guide and actually being able to follow it are two very different things. If you follow YouTube channels like The Thought Emporium you'll see how hard it is just to duplicate existing lab results from published sources in biology. To go a step further and create new dangerous things without also getting yourself killed in the process is a pretty tall order.
We should be talking about the more abstract problem of asymmetric defense and offense.
Imagine that nukes were easy to make with household items. That would be a scenario where offense is easy but defense is hard. And we would not exist as a species anymore.
Once a hypothetical technology like this is discovered one time, it's not possible to put the genie back in the bottle without extreme levels of surveillance.
We got lucky that nukes were hard to make. We had no idea that would be the case before nuclear physics was discovered, but we played Russian Roulette and survived.
> Having a step-by-step guide and actually being able to follow it are two very different things.
Exactly. We'll see how far it goes. It might be a more elaborate "draw the rest of the owl" guide, like:
1. obtain uranium-238
2. fire up the centrifuge for isotope separation
3. drop yellowcake into it
4. collect uranium-235
...
You missed the part where you turn the uranium metal into a gas for the centrifuge to work in the first place.
I remember people on HN having a less than favorable sentiment about Mustafa Suleyman:
https://news.ycombinator.com/item?id=39757330
Regardless, he's certainly been in the right places to understand AI trends and Gates' write-up makes it sound like an intriguing distillation. Thanks for posting!
My favorite book on AI is Sutton and Barto's "Reinforcement Learning: An Introduction". Looking just at the URL I knew this would be some pop-sci tripe, but I'm leaving this comment here in case people want something other than what they can tout on Twitter/X.
It's a book written for non experts and clearly labeled as such. Bill Gates likely knows more about AI than you do and yet he recommends a book that normal people can understand. There may be a lesson here.
This reads like a PR-written blog post recommending a PR-written book.
Gates has posted about books he recommended for as long as I can remember. Maybe look at his past recommendations before judging?
Interesting critical review at:
https://www.goodreads.com/book/show/90590134-the-coming-wave
>...
>Given that The Coming Wave assumes that technology comes in waves and these waves are driven by insiders, the solution it proposes is containment—governments should determine (via regulation) who gets to develop the technology, and what uses they should put the technology to. The assumption seems to be that governments can control access to natural choke points in the technology. One figure the book offers is that around 80% of the sand used in semiconductors comes from a single mine—control the mine and you control much of that aspect of the industry. This is not true though. Nuclear containment, for example, relies more on peer pressure between nation states than regulation per se. It's quite possible to build a reactor or bomb in your backyard. The more you scale up these efforts, the more likely it is that the international community will notice and press you to stop. Squeezing on one of these choke points is more likely to move the activity somewhere else than enable you to control it.
>...
>At its heart this is a book by an insider arguing that someone is going to develop this world-changing technology, and it should be them.
> 80% of the sand used in semiconductors comes from a single mine—control the mine and you control much of that aspect of the industry.
Tangent, but I suspect the reality is that as soon as you cut off production at that mine, the math changes such that a bunch of other potential mines that weren't profitable before suddenly become profitable. The end result is just slightly more expensive sand, which is presumably only a small portion of the entire cost of a semiconductor.
While I've enjoyed the small bursts of wisdom in many of Bill Gates's shorter talks, I haven't found anything noteworthy in his written reviews and books. His viewpoints are often bizarre and radical. I still chuckle remembering his conviction, in his book "The Road Ahead" (1995), that ISDN would become the dominant Internet technology before the year 2000. It seemed bizarre even back then.
TED talk by the author, probably with similar content to the book: https://youtu.be/KKNCiRWd_j0
For those negative on the book, do you have a better suggestion for me to read? Maybe I should just ask ChatGPT.
Of all the recent books I've read about AI, this was by far the worst. The Singularity Is Nearer, Life 3.0, and A Brief History of Intelligence were much, much better imho.
> and leave you better prepared to ride the coming wave, instead of getting swept away by it
There are waves that cannot be ridden.
With all due respect, I would have hoped to see a list of other AI books reviewed, with a recommendation. Currently the article reads like a preface for the book "The Coming Wave".
Some folks really need to read Thibault Prévost's "Les prophètes de l'IA", in order to comprehend the motivation behind work like Suleyman's.
From the guy who didn't see the internet coming ...
Gates wrote his "Internet Tidal Wave" email in mid-1995 (https://wired.com/2010/05/0526bill-gates-internet-memo/), only two years after NCSA publicly released Mosaic.
By the late 1990s, Microsoft's competition (including Netscape and Apple) was nearly dead. In fact, the browser that Apple originally shipped with OS X was M$ Internet Explorer.
Gates was several months late to the web, but it's not like he missed the boat.
I went from Apple to Microsoft in 1995 partially because Microsoft was so far ahead of Apple on the Internet. At the time, Apple was entirely concerned with promoting eWorld. (Ironic because Apple had a /8 IP allocation and was processing something like a quarter of Usenet traffic well before this.) They both had to get out of their walled garden “compete with AOL” models, but MSFT did it faster.
The other problem with the Mac in those years was that there was no decent web browser.
Windows MSIE eventually surpassed the usability, functionality and popularity of Netscape, but Microsoft's Mac version of MSIE did not.
In the late 1990s, many websites did not render or function correctly on Macintosh.
Absolutely everybody in software was talking about the internet in 1995, and for the most part was already on it, and companies were already pivoting to the internet left and right. It made me realize how discretionary all the work we do in these large industries is, because whatever we were working on in 1993 and 1994 no longer mattered; we were now working on making whatever assets we had compatible with the internet.
I'm not sure I'd agree that pivoting is a signal that the prior work was discretionary. I think in a lot of cases the strategic pressure from competitors becoming more efficient or productive by capturing Internet traffic could have been a real reason as well.
No, not everybody... pre-existing sectors with commercial applications, get this, existed before the Internet. Lots of companies had their verticals, and as mentioned elsewhere, the "open net" was not at all the same.
Btw, has anybody here read Anil Ananthaswamy's "Why Machines Learn: The Elegant Math Behind Modern AI"?
Would like to get a technical review of this.
In an early chapter he talks about how well LLMs know medical and legal information, but doesn't mention how they make things up... I was hoping he'd discuss the challenges and hurdles right away...
https://www.fimfiction.net/story/62074/friendship-is-optimal
God damn it. So far we've got the Harry Potter fan fiction, the My Little Pony fan fiction, the pop-sci book Gates is talking about, and one actual book, Reinforcement Learning: An Introduction by R. Sutton and A. Barto.
We need something that's technical enough to be useful, but not based on outdated assumptions about the technology used to implement AI.
Is he recommending the book because MS hired Suleyman?
Did MS hire Suleyman in March 2024 because Gates liked the book?
It's an okay book, but there isn't really anything in it that you couldn't infer after reading the first 10%. A lot of common-sense warnings about risks from AI, bioweapons, cyberattacks, etc., but it's all very generic. There's no chapter in it that I found had any genuine insight. An interesting chapter would have been "what if I'm completely wrong and all we get is a bunch of meme generators and the next bubble", but that never appears to be a possibility.
Oddly enough, that's the case with a lot of books that end up on Gates's recommended lists. I saw someone recently say, maybe a bit too meanly, that we might make it to AGI because Yuval Noah Harari keeps writing books that look more and more like they were written by ChatGPT, and it's not entirely untrue of a lot of the stuff Gates recommends.
Can't take this seriously knowing that this is the same Mustafa Suleyman who...
- Was basically acqui-hired by Microsoft from Inflection AI, maker of Pi (seems a little biased to recommend a book from one of your own)
- Left DeepMind due to allegations of bullying (https://en.wikipedia.org/wiki/Mustafa_Suleyman#DeepMind_and_...)
- Allegedly yelled at OpenAI employees because they weren't sharing technologies frequently enough (https://www.nytimes.com/2024/10/17/technology/microsoft-open...)
But what do I know, maybe if I read it and regurgitate its contents in a not-too-obvious way I can get an AI policy job.
Suleyman seems to have gotten ahead in AI basically by being mates with Demis Hassabis and joining him in founding the company. He doesn't seem to have achieved much in actual AI; his background is more in things like setting up a "Muslim Youth Helpline" and being a "policy officer on human rights for Ken Livingstone."
The best AI book of 2024 is a messy fluff piece ghost-written by GPT-3.5 and stamped with the name of someone specifically tasked with making Bill Gates even richer? Bullshit.
I’ll remind everyone that Gates was a long-time friend of Jeffrey Epstein, long after it was well-known what the man’s true business was. We shouldn’t let Gates’s money and past technical contributions launder his reputation. Like most other things he does, this is PR designed to prop up his already impossible wealth.
How can anyone take Mustafa Suleyman seriously?
Bill Gates is not some great thinker. He was born on third base and in the right place at the right time, and then absolutely ruthless once Microsoft had power in the industry. As a thought leader he is extremely mediocre.
You're right, I'm sure you could have done what he did (or even better!) if you'd been in his shoes.
The author of that book is a non-technical co-founder of DeepMind who currently leads Microsoft's AI efforts.
Apologies if I've missed your point, but if what you're hinting at is that we should take his ideas about the social impact of AI less seriously because he's not deep into writing Rust code all day, that's just laughable.
Well yes, we don't really know what this thing is yet, and the only ones who kind of understand it are the researchers themselves.
No, my point is that before reading a book it is often helpful to know who the author is and what their track record is.
This is from Mustafa Suleyman; don't bother opening the link.
If people don't know him: he is the classic impostor who gets by contributing nothing to the field while investing big in PR and bots.
News flash: he has gotten a promotion, staff, and budgets since then.
After he was demoted at DeepMind for poor performance as a manager, that is. Suleyman is no technologist. He's a marketing promoter. He attended one year of college before dropping out to help sell DeepMind.
You have to wonder what's going on in Gates' head these days to not recognize the lack of substance in such a book, and in its author.
Far better books on the possible futures of modern AI are Stuart Russell's "Human Compatible" or Brian Christian's "The Alignment Problem", both of which predate the boom in LLMs but still anticipate their Achilles' heel: the inability to control what they learn or how they will use it.
I wish he would read "The Israel Lobby" by John Mearsheimer and Stephen Walt. Gates has some responsibility, as an elite billionaire, to safeguard our democracy.
> I’ve always been an optimist, and reading The Coming Wave hasn’t changed that.
I'm not an optimist, but I fail to see the dangers of AI. I think it's more likely we will be wiped out by nuclear war, or climate change, or the collapse of biodiversity and ecosystems resulting in worldwide famines, before AI is advanced enough to constitute any kind of threat to our existence.
The dangers are not in AI per se. As with nuclear fission and fusion, the danger is in how the technology 1) may be misused by corporations that are clueless about or indifferent to the damage it can inflict, and 2) surely will be deregulated by the increasingly stupid and malignant boobs infecting Washington.
An alpha reader for my hard sci-fi novel wrote:
"Hey just wanted to let you know I started autónoma - only started - I'm at page 45 now but I'm really digging this - love getting into a story about AI and the image of the scene in Japan in the game - super great - and the scene with the coy wolves- I'm totally in."
The novel has a take on AGI and ASI that diverges from our fear of machines that will destroy/control/enslave humanity. I'd be grateful for any other alpha readers who'd like to give me their thoughts on the story, especially with respect to the economic ramifications. See my profile for contact details.
What is the game? Is this one of those "gamer" things? You're 14 and 10 years late, respectively, for Ready Player One and Friendship is Optimal.
Oh, and there's a bunch of even older stuff going back to the 80s.