Glad you brought up a minor but intriguing element of this. Quora has a CEO? Really? WTF. Especially a CEO who sits on the board of OpenAI? Considering that Quora generates most of its content the old-fashioned way, by plagiarism, and, as far as I can tell, generates revenue the way catfish get their dinner, by eating whatever shit sinks to the bottom, I was like "wait, what? There's a CEO and he's actually on an important company's board?"
Seriously, I was thinking the same thing. I got warned and booted from Quora years ago. I am not sure what my transgression was; they never really told me. I spent a lot of time there pouring out product management advice (my profession), and I was blindsided when they booted me.
Good one and thank you for your link to Farrell. Subscription-worthy.
Recently, Sam Altman received the Hawking Fellowship on behalf of the OpenAI team, and he spoke for a few minutes followed by a Q&A (available on YouTube). In that session he was asked what qualities are important for 'founders' of these innovative tech firms. He answered that founders should have 'deeply held convictions' that remain stable without a lot of 'positive external reinforcement', an 'obsession' with a problem, and a 'super powerful internal drive'. They needed to be 'evangelists'. TED just released Ilya Sutskever's talk, and you see it there too. There is more than a shallow relation to the early WIRED culture you have been writing about.
The optimists *and* the pessimists on both sides of the OpenAI fight have one thing in common: they are strong believers that "AGI is Nigh!" Both are evangelists for this belief. What results is a world of disciples and followers of that belief, including ones who wield lots of money (mostly on the side of the optimists, which is not that strange; I recall psychological research showing that entrepreneurs overestimate positive outcomes and underestimate risk).
The psychological and sociological/cultural side of the current GPT fever is far more important and telling than the (really limited) technical reality. Short summary of where we are now: quantity has a certain quality of its own, but while the systems may be impressive, we humans are impressionable. (https://erikjlarson.substack.com/p/gerben-wierda-on-chatgpt-altman-and)
Well, this is a silver lining I was not expecting. I agree with the too-good-to-be-true vibes from Altman, and I hope that a Microsoft-driven AI (let a thousand Clippy jokes bloom!) gets treated with a little more skepticism. Better than a bunch of loons obsessing about an undefinable threat in the future (when we all have flying cars), anyway. Those guys would trigger the collapse of civilization by accident.
“It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. In considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable.” — Ada Lovelace ~1842
I'm not an accountant, but also consider this for the OpenAI business model of taking compute credits from the "800 lb partner in the room," MSFT:
"Like-kind exchanges -- when you exchange real property used for business or held as an investment solely for other business or investment property that is the same type or “like-kind” -- have long been permitted under the Internal Revenue Code. Generally, if you make a like-kind exchange, you are not required to recognize a gain or loss under Internal Revenue Code Section 1031. If, as part of the exchange, you also receive other (not like-kind) property or money, you must recognize a gain to the extent of the other property and money received. You can’t recognize a loss." --IRS.gov
Could MSFT have been capable of flipping/spinning off the IP via osmosis from its interaction with O-AI at the end of this saga sans cap gain?
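The "boot" rule in that IRS passage is simple arithmetic: gain is recognized only up to the amount of non-like-kind property or money received, and a loss is never recognized. A hypothetical sketch (illustrative only, definitely not tax advice; the names and numbers are made up):

```python
def recognized_gain(realized_gain: float, boot_received: float) -> float:
    """Illustrate the IRC Section 1031 boot rule quoted above:
    gain is recognized only to the extent of non-like-kind property
    or money ('boot') received; losses are not recognized."""
    if realized_gain <= 0:
        # A realized loss can't be recognized in a like-kind exchange.
        return 0.0
    # Gain is recognized only up to the boot actually received.
    return min(realized_gain, boot_received)

# E.g. a $500k realized gain with $120k of cash boot:
# only $120k is recognized now; the rest is deferred.
print(recognized_gain(500_000, 120_000))  # 120000
```

Of course, since 2018 Section 1031 is limited to real property, so whether any of this could stretch to IP or compute credits is exactly the kind of question for an actual accountant.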
What is the old adage? "The more things change, the more they stay the same..." Anyone reading my post here would say, "But nothing is the same, Rob." Sure. I offer another nonsensical quip: "AI = us; us = you and I; we = current society." We are all sitting in the front-row seat of the World? Really, we are. The future is doable, and we can do this. Crazy, chaotic? Yes, of course. Do we expect anything less at this stage of our Civilization? What year is it? We are eternal optimists? We all know we must "hang in there" and "stay tuned in." And take breaks from our screens when we need to, for the sake of our sanity. Thank you for another great article. Please stay safe and well, Everyone.
Excellent summary. I think when the depth psychologists examine the kind of exaggerated, self-conscious, and intentional perfection you describe, they label it Ego Inflation. It's not a good thing, especially when it becomes the totality of how people or organizations operate.
Another day, another grifter exposed. More regulation for tech now.
....yet another Stanford "drop out"....lol.
Nailed it.
This is a great take. The Californian Ideology is, unfortunately, alive and well.
You’re going to enjoy this one: https://ea.rna.nl/2023/11/26/artificial-general-intelligence-is-nigh-rejoice-be-very-afraid/
This was a good read, thank you.
Worth noting: Quora owns poe.com, an OpenAI competitor / customer / frenemy.