Generative AI and the Importance of Original Thought

I have a test to share with you – actually, it’s a test for large language models (LLMs), and Simon Willison has done it for us. With all the performance metrics available for assessing LLMs, Simon sought to create a “purer” test – one that LLMs couldn’t be trained on and had not seen before – to gain another perspective on their capabilities today.

The question he posed? Draw an SVG of a pelican riding a bicycle. I’ve copied a subset of the responses from Simon’s tests below – and as you’ll see, none performed well at all.

The reason these images look worse than my own drawing skills (and that’s saying something) is that LLMs haven’t been exposed to this kind of content – SVG code depicting a pelican on a bicycle simply doesn’t appear in the training data often enough to be learned.

More notably, the results indicate they didn’t do a great job of reasoning and iterating over their work to refine and improve the end result – even OpenAI’s o1, which, at the time of writing, boasts the best reasoning capabilities available in the market.

Why does this happen?

LLMs are trained on a vast corpus of existing information – massive amounts of text data, such as books, academic papers, forums, articles, and websites, to help them understand and generate human-like language. LLMs learn to identify patterns and relationships within this data, much like how people learn from reading and experience. The result – an LLM – is essentially a massive cloud of numbers representing words and paragraphs, hence the name “large language model.”

To recap, an LLM functions by predicting words – it’s essentially a much more advanced version of the autocomplete you see when typing. Rather than merely predicting the next word, it can generate whole paragraphs of content. In many cases, these outputs are impressive. However, as we saw in the example above, LLMs don’t perform particularly well when asked to create an SVG of a pelican on a bike, because that content is not known to them.
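
To make the autocomplete analogy concrete, here’s a toy sketch in Python. The probability table is hand-written and purely illustrative – a real LLM does the same kind of next-word prediction with billions of learned parameters rather than a lookup table:

```python
import random

# Hand-written "training data" probabilities - purely illustrative.
next_word_probs = {
    ("the", "pelican"): {"flew": 0.5, "sat": 0.3, "ate": 0.2},
    ("pelican", "flew"): {"away": 0.6, "south": 0.4},
    ("pelican", "sat"): {"down": 1.0},
    ("pelican", "ate"): {"fish": 1.0},
}

def generate(prompt: str, max_words: int = 5) -> str:
    """Repeatedly predict the next word from the last two words of context."""
    words = prompt.split()
    for _ in range(max_words):
        context = tuple(words[-2:])
        options = next_word_probs.get(context)
        if not options:
            # Unseen context: nothing learned to draw on, so generation stops.
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the pelican"))  # e.g. "the pelican flew away"
```

The stop on an unseen context is a crude analogue of the pelican-on-a-bicycle problem: when the relevant patterns were never in the training data, there’s nothing for the model to draw on.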

LLMs excel at synthesizing what they already know from the training corpus or what is injected into a prompt. As they advance – to Ph.D. level, for example – it’s reasonable to expect that you can ask any question based on broadly known information and receive a decent response. For many white-collar tasks today – writing briefs, analyzing reports, responding to tenders, and so forth – generative AI is likely to replace (or at the very least, speed up) many of the time-consuming parts of those jobs.

That said, LLMs still can’t create genuinely novel thoughts because they cannot synthesize information they don’t already know.

In practical terms, this means they won’t help you learn a programming language that has only recently been published online, design a completely new product without additional input, or understand your internal business processes without you sharing them first. Generative AI is unaware of these scenarios because none of them were included in the training dataset.

That’s the entire basis for retrieval-augmented generation (RAG) and prompt engineering: both provide additional information to the model that it can process at the time the request is made. Hence the importance of getting search right in generative AI apps.
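
As a rough sketch of that idea (the function names and the keyword-overlap search below are illustrative only – production RAG systems typically use vector search over embeddings):

```python
# RAG in miniature: retrieve relevant internal documents first, then
# inject them into the prompt so the model can reason over content
# it was never trained on.

def retrieve(query, documents, top_k=2):
    """Naive keyword-overlap search; real systems use vector search."""
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(documents, key=score, reverse=True)[:top_k]

def build_prompt(query, documents):
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

internal_docs = [
    "Invoices over $10,000 require two approvals.",
    "All tenders must be reviewed by the legal team.",
    "Staff onboarding takes five business days.",
]
print(build_prompt("Who approves large invoices?", internal_docs))
```

The quality of the `retrieve` step determines what the model gets to see – which is exactly why search quality makes or breaks these applications.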

What does this mean?

The biggest opportunity in business remains unchanged, and it’s timeless: creating new things and adapting to ever-changing market conditions to meet evolving customer demands.

Success comes from identifying unmet needs, envisioning new possibilities, listening to customers, and staying relevant. This requires critical thinking to analyze challenges deeply, ask questions, and harness imagination to design bold solutions. These qualities are particularly crucial in today’s world, where disruption is constant and innovation is the key to staying ahead.

Does this usher in a re-emergence of humanities subjects in education?

As AI continues to influence industries, particularly generative AI in white-collar work over the next few years, I expect to see an increasing recognition of the importance of studying humanities subjects alongside STEM subjects in higher education to prepare those workers for the future. Humanities disciplines (philosophy, literature, history, politics, law, etc.) explore fundamental questions about meaning, morality, identity, and the human condition. These disciplines train individuals to analyze arguments, question assumptions, and engage with diverse perspectives. Put simply, these skills help us see the world not just as it is today, but as it could be.

Equally importantly, the humanities help us see the world in shades of grey – an increasingly important skill for those working in technology. While much discussion rightly focuses on “responsible AI,” there’s a nuance around “ethical AI” that I feel requires further exploration.

Is a perfectly responsible AI implementation – for example, one that theoretically achieves zero bias, offers complete safety controls, and so forth – an ethical implementation if it automates hundreds of jobs away? What if all the affected workers are located in a regional town or developing economy, disrupting an entire community – as is often the case with call centers and BPO contracts? Or perhaps “that’s just business” is an appropriate answer – it depends on one’s point of view.

A worked example – AI in a professional services company

Through dialogues like The Republic or Meno, Plato showed how asking the right questions could guide individuals to reflect, challenge assumptions, and gain insights that transcend surface-level answers. The essence of his work demonstrates that the process of inquiry is often more valuable than immediate answers because it leads to greater understanding and wisdom.

For a large professional services company considering implementing a generative AI solution, the key to success lies in recognizing that “the right question is usually more important than the right answer.”

Generative AI can be incredibly powerful, but its value depends on how it’s used to address specific client needs or internal challenges.

Instead of simply asking, “How can we automate document creation?”, a more transformative question might be, “How can we reimagine how clients access insights and recommendations from our expertise?” This shifts the focus from a narrow technical problem to a broader exploration of creating value, fostering innovation, and staying ahead in a competitive landscape, without a predetermined solution in mind.

Design thinking workshops certainly try to capture this. But between the rush to get thoughts down on sticky notes and the varying motivations, experience, diversity, and role status of each participant, it’s sometimes challenging to reach beyond the most obvious conclusions and follow-up actions in a handful of hours.

By framing the right questions, technologists and business professionals alike can better align AI solutions with their strategic goals and ensure the technology complements human creativity and critical thinking in their day-to-day work.

STEM skills will remain valuable

STEM skills obviously play a crucial role in working through scenarios to reach the right practical solution once the right questions have been asked – so fear not if you’ve just completed your computer science degree.

For instance, in the professional services example above, STEM skills help a software developer build a methodology for modeling different outcomes – a structured approach to testing and measuring how AI might enhance efficiency, accuracy, and client satisfaction across a range of scenarios.

This analytical mindset ensures that the final solution is not only technically feasible but also tailored to practical, real-world needs, optimizing both operational efficiency and client value.
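
As a minimal, hypothetical sketch of what that test-and-measure approach might look like – the workflows and scenarios below are stand-ins, not a real implementation:

```python
import time
from statistics import mean

def evaluate(workflow, scenarios):
    """Run a workflow over test scenarios and report speed and accuracy."""
    durations, correct = [], 0
    for scenario in scenarios:
        start = time.perf_counter()
        answer = workflow(scenario["input"])
        durations.append(time.perf_counter() - start)
        correct += answer == scenario["expected"]
    return {"avg_seconds": mean(durations), "accuracy": correct / len(scenarios)}

# Stand-ins: in practice these would call the existing manual process
# and the proposed AI-assisted workflow respectively.
def baseline_workflow(text):
    return text.upper()

def ai_assisted_workflow(text):
    return text.upper()

scenarios = [{"input": "draft tender response", "expected": "DRAFT TENDER RESPONSE"}]
for name, workflow in [("baseline", baseline_workflow), ("ai-assisted", ai_assisted_workflow)]:
    print(name, evaluate(workflow, scenarios))
```

Running both candidate workflows over the same scenario set keeps the comparison structured and repeatable, rather than relying on gut feel about whether the AI actually helped.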

Takeaways

I remember when Google search first launched in the late 1990s. Nearing the end of my degree, I wondered whether my skills would become obsolete, given that all the answers anyone would need were just a few search queries away. In many respects, generative AI is ushering in the next iteration of knowledge discovery and consumption, and so I feel confident we will continue to adapt to this new paradigm.

Although AI is rapidly progressing and will continue to do so, expect to see organisations and white-collar workforces place even more value on critical thinking and synthesising information for their competitive advantage.

Be curious and ask lots of questions, as conversations and the understanding of different perspectives are what stimulate original thought and genuine innovation.

In higher education, I’d expect to see humanities subjects that encourage “bigger picture” thinking scattered through technical degrees, even if those subjects don’t at first glance appear to have obvious commercial application. Blended humanities and technical double degrees such as Arts/Commerce and Law/Science will of course continue too.

Most importantly though, remember that LLMs cannot generate new ideas or truly listen to customers – they excel at summarizing information and helping us synthesize it.

Original thought will still be essential, and asking the right questions is the best way to begin. So go forth, be curious, and seek answers that enlighten and challenge your thinking. While I expect generative AI to continue to advance rapidly, I still believe that the human elements of imagination, intuition and personal connection will continue to be valued.

