Achieving real value depends on moving from theoretical models to production-level accuracy, a shift that requires sustained investment in data labelling and data development.

Studies consistently show that specialised, fine-tuned models outperform generic counterparts such as ChatGPT, as well as alternative approaches such as zero-shot learning and prompt-based methods.

OpenAI emphasises the need for at least 100 labelled examples per class for effective fine-tuning. Data development is therefore not a mundane chore; it is what sets a powerful cycle in motion.
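As a minimal sketch of what that data development looks like in practice, the snippet below checks that a labelled dataset meets a per-class minimum before fine-tuning, and writes it out in the chat-style JSONL layout commonly used for fine-tuning jobs. The example rows, the `MIN_PER_CLASS` threshold (taken from the 100-examples-per-class guidance above), and the output filename are illustrative assumptions, not part of any official workflow.

```python
import json
from collections import Counter

# Hypothetical labelled examples: (text, label) pairs.
# In practice you would load hundreds of rows per class.
examples = [
    ("Refund not received after 10 days", "billing"),
    ("App crashes on startup", "bug"),
]

MIN_PER_CLASS = 100  # per-class minimum suggested in the text above


def check_class_coverage(rows, minimum=MIN_PER_CLASS):
    """Return classes that still fall short of the per-class minimum."""
    counts = Counter(label for _, label in rows)
    return {label: n for label, n in counts.items() if n < minimum}


def to_chat_jsonl(rows, path="train.jsonl"):
    """Write examples in a chat-style JSONL layout for fine-tuning."""
    with open(path, "w") as f:
        for text, label in rows:
            record = {"messages": [
                {"role": "user", "content": text},
                {"role": "assistant", "content": label},
            ]}
            f.write(json.dumps(record) + "\n")


shortfall = check_class_coverage(examples)
if shortfall:
    print("Classes needing more labelled data:", shortfall)
else:
    to_chat_jsonl(examples)
```

A check like this makes the labelling gap explicit: each class that falls short tells you exactly where the next round of data development effort should go.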

The more fine-tuning you do, the more adaptable and powerful your model becomes; the focus shifts from the base model itself to continuous refinement.

The last mile of meticulous fine-tuning and data development is where the true value of AI is realised: owning and mastering your data is what drives AI capability.

The future lies in personalised models (“GPT-You”) rather than a one-size-fits-all approach like GPT-X.