Discussion about this post

Gerben Wierda:

Good stuff, as usual.

I do think, though, that this post expects too much from 'generative AI'. Generative AI has serious trouble producing *meaningful* results. The results are well-structured and they fit the subject, but they are not trustworthy, nor is there any sign that they will be. Reading OpenAI's GPT-3 paper on few-shot learning (https://arxiv.org/pdf/2005.14165.pdf), for instance, it is clear that — beyond producing language that is well-structured and 'fitting' — the results on being *trustworthy* in terms of content/meaning remain very poor. Generative AI seems magical not because the systems are intelligent, but because we humans are not that intelligent. We are easily fooled/misled.

See https://ea.rna.nl/2022/12/12/cicero-and-chatgpt-signs-of-ai-progress/

Rachel Rossos:

As someone who regularly relies on data, this rings so true to me: "...it’s more like only Google is positioned to know. Only Google has the capacity to know. But given the magnitude of their traffic, the company only notices those things that staff have been assigned to look for."

Thank you for making that observation. It's not enough to have the data - someone has to actually be looking at and analyzing it. It always comes back to people's time and what they are focused on.
