Is it really possible to learn how to program an iPhone app, with very little experience, using new Artificial Intelligence (AI) capabilities? Recently launched AI tools such as ChatGPT suggest a paradigm shift in capabilities rather than incremental progress, and when the hype started to trickle into my Twitter feed at the end of 2022, I became interested in testing these capabilities.
In this article I won’t write in any detail about Large Language Models (LLMs) or other “generative AI” such as DALL-E, Midjourney or Soundraw, nor will I discuss the use of APIs, cloud or other related technologies. I will, however, trace a few months of exploring large language model capabilities, such as the Generative Pre-trained Transformer (GPT) models provided by OpenAI.
I started to probe:
- Does ChatGPT speak German? French? Punjabi?
- How much does this engine know about my favourite reading topics?
- How long do I need to air fry chicken wings?
- Can it help unravel complexities in texts?
- Can it help relate concepts from different philosophers?
- Can it explain itself?
- And how about the jailbreaks?
- Is it possible to duck under the guardrails and let “generative AI” rip?
- Can it really help me code?
Was the hype justified? Could it get my kids off Fortnite?
Let’s start with the “simple” capabilities. It’s quite astounding to see how far “basic” translation tools have come. As far as I can tell, ChatGPT is fluent in English and German; I’ll leave it to better speakers of French and Punjabi to judge for themselves, but so far so good from what I hear. Those official emails are now a little more polished. I’m a fairly busy reader with an interest in philosophy and psychology, and I wanted to establish whether the tool could assist me in digesting, summarising, recalling and relating key thinkers, their positions and their arguments. I also sought to understand whether the tool could innovate. LLMs are a type of artificial neural network trained on massive datasets using unsupervised techniques such as self-supervised learning. During training, LLMs learn word embeddings – representations of words as vectors – which are weighted to capture each word’s importance in the context of a given sentence or text. This weighting helps LLMs generate coherent and contextually appropriate text from input prompts, while also giving them some ability to understand and analyse complex concepts. It wasn’t much of a surprise that the tool didn’t have too many bright ideas when I asked it to relate the ideas of more esoteric thinkers – after all, they weren’t in the training set.
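The embedding idea above can be sketched with toy numbers. The vectors and words below are purely illustrative (real models learn vectors with thousands of dimensions); the point is that related words end up pointing in similar directions, which we can measure with cosine similarity:

```python
import math

# Toy 3-dimensional word embeddings (made-up values for illustration only;
# an LLM learns these vectors from its training data).
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.9],
    "apple": [0.1, 0.9, 0.2],
}

def cosine_similarity(a, b):
    """How closely two word vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words score higher than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]) >
      cosine_similarity(embeddings["king"], embeddings["apple"]))  # → True
```

In a real model, similar geometry underlies the contextual weighting that lets the network pick plausible next words.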
The very brief explanation of the LLM above was verified by ChatGPT, by the way, and getting a handle on the guardrails of the tool was fun – sadly, once published, hacks to get past the guardrails are quickly locked down. Some time with the tool unhinged is fun – though it can quickly turn nasty; after all, the internet’s bile is part of the training data too. In the future, this article itself may feed into a training set. I wonder to what extent the training set’s single dimension – text – is a limitation in establishing accuracy; the internet can be an echo chamber, and at some point there will be diminishing marginal utility as all sources are ingested. This tells me there will always be space for creativity beyond the LLM. I spent by far the most time getting ChatGPT to coach me in writing code. I started with a piece of Python code on my laptop: I wanted an application that would replicate the I Ching oracle, with a diary function. Within little time I had to develop skills as a prompt engineer, learning to tailor my requests accordingly. The tool helps here, breaking down its responses into sections and explaining the code, so that I know which part of the code to refer to when I need an explanation, a refinement, or to point out mistakes. ChatGPT is servile to the point of obsequiousness, and so will always apologise, no matter how poor the requirements – I have decided to remain polite, you never know, though sometimes a bit of pressure will yield more cooperation.
I found some limitations when coding, in particular when producing larger code snippets. With some tricks, though, a functioning app was pulled together in perhaps less than ten hours: the oracle, with Unicode hexagram representations, calculations in line with traditional probabilities, and an inspectable diary, all built using Python, with Tkinter for the UI and SQLite for database storage. I have limited experience in coding and the AI was a major accelerant.
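To give a flavour of the “traditional probabilities” involved, here is a minimal sketch of the classic three-coin casting method – not the actual code ChatGPT and I produced. Heads counts 3 and tails 2, so each line totals 6, 7, 8 or 9 with the traditional odds. Note that mapping the resulting lines onto the Unicode hexagram block (U+4DC0 onwards) would additionally need a King Wen lookup table, omitted here:

```python
import random

# Three-coin cast: heads = 3, tails = 2, so each line totals
# 6 (old yin, p=1/8), 7 (young yang, 3/8), 8 (young yin, 3/8) or 9 (old yang, 1/8).
def cast_line(rng=random):
    return sum(rng.choice((2, 3)) for _ in range(3))

def cast_hexagram(rng=random):
    """Cast six lines, bottom line first."""
    return [cast_line(rng) for _ in range(6)]

def primary_and_changed(lines):
    """Odd totals are yang (1), even are yin (0); the 'old' lines
    (6 and 9) flip in the changed hexagram."""
    primary = [total % 2 for total in lines]
    changed = [1 - (total % 2) if total in (6, 9) else total % 2
               for total in lines]
    return primary, changed
```

For example, `primary_and_changed([6, 7, 8, 9, 7, 8])` yields the primary pattern `[0, 1, 0, 1, 1, 0]` and the changed pattern `[1, 1, 0, 0, 1, 0]`.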
Let’s up the ante: “Dear ChatGPT, can you please help me build this application in SwiftUI, as I’d much prefer it on my iPhone?” The bigger challenge here was my unfamiliarity with developing in Xcode, using SwiftUI, and other more refined aspects of software engineering. The first version, a basic (and ugly) prototype, was up within a few hours, but I couldn’t get a basic diary function implemented. I tore it down and started from scratch, integrating the prompt learnings and a newfound software engineering mindset (start small, define basic functions, build out, enhance), supported by a timely upgrade to version 4 of the GPT engine. Things changed dramatically. You may have read claims that the number of parameters increased from 175 billion to 100 trillion between versions (OpenAI has not confirmed GPT-4’s size). Whatever the cause, the step change was noticeable. Within 10–15 hours I had built, from the base up, a working app, deployed to my iPhone, with an icon generated by DALL-E, in business-as-usual use. This was something I never expected and I am still a little surprised today.
Any parent reading this will know all about prompt engineering – it’s teasing out a certain response towards a certain outcome without being (too) manipulative. So could I prompt engineer my son to spend less time on Fortnite and engage with ChatGPT? Well, I could. With a quick tutorial in Visual Studio Code, another on engaging ChatGPT and how to ask questions, and an architectural steer, my son developed a little Python application, using Tkinter for the UI and a little SQLite database for the high scores. What high scores? Well, it was an aim trainer, to learn to better shoot… in Fortnite. Small steps!
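A high-score store of the kind the aim trainer used can be sketched in a few lines with Python’s built-in `sqlite3` module. The table and function names below are illustrative, not the actual app’s schema:

```python
import sqlite3

def open_scores(path=":memory:"):
    """Open (or create) the high-score database."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS scores (player TEXT, score INTEGER)"
    )
    return conn

def add_score(conn, player, score):
    """Record one game's result; parameterised query avoids SQL injection."""
    conn.execute("INSERT INTO scores VALUES (?, ?)", (player, score))
    conn.commit()

def top_scores(conn, limit=5):
    """Return the leaderboard, best score first."""
    return conn.execute(
        "SELECT player, score FROM scores ORDER BY score DESC LIMIT ?",
        (limit,),
    ).fetchall()
```

A Tkinter front end would then call `add_score` at the end of each round and render `top_scores` on a leaderboard screen.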
To summarise, then. “Hype” by definition means to exaggerate to some extent. I’ve used ChatGPT to prepare for job interviews, to help draft a PhD proposal, and to answer questions across a range of topics for different audiences – it can adopt personas, too. I would not have believed any of this possible a year ago today. So, yes, ChatGPT lives up to the hype. We can already see how these capabilities are being integrated into daily life – copilots for coding, Chrome extensions, and so on. This will change the way we work, and it is a threat to a lot of knowledge work that relies on bodies of text and the ability to understand, reference and retrieve knowledge – medical diagnoses, legal precedents, code snippet libraries. It is now time to adapt and establish how this can be integrated and put to good use. In time, limitations and constraints will become more transparent; in turn, these too will be lifted, and LLMs will move into a phase of incremental progress.
Much broader topics still need consideration, though, and the one I most need to understand is what governance and ethical considerations should be established around the use of AI. For data subjects we can already resort to GDPR and similar regulation and legislation.
- How “free” should AI be?
- Should we, or must we, agree on common and transparent ethical guardrails?
- Should I know when I’m talking to a bot?
- Do I have the right to refuse an AI generated diagnosis?
- What about the AI itself, does the AI itself need rights?
ChatGPT arguably passes the Turing test, and so we are at a point where we have to reframe our understanding of intelligence. To what extent does AI challenge our notions of sentience, agency, and autonomy?
So yes, I was able to build an iPhone app with very little experience. And the air fried chicken wings were tasty!
This Insight was written by our guest author Damian Spendel. Follow @DSpendel on Twitter for more of his AI insights.