
AI Avatar Teele 2026: Unrecognizable in a Year

Jarmo Tuisk · 4 min read

How much can an AI avatar evolve in a year? Enough that the TV host has to remind viewers – this isn't a real person.

For the second year in a row, Kanal2 (Estonia's major TV channel) invited us to create an AI avatar named Teele for their New Year's Eve broadcast. Last year, this was a completely new challenge for us – we wrote about that experience here. This time, we knew what to expect. Or so we thought.

What Changed in a Year?

Short answer: almost everything. The longer answer follows.

Realism Has Jumped Dramatically

With last year's Teele, it was obvious you were looking at AI. This year? The host Romi Hasa had to repeatedly remind viewers that Teele wasn't a real person. This isn't just a technical achievement – it's a marker of where AI avatar technology has arrived.

It's not just the facial expressions and lip movements that are more accurate. The entire presence – the glances, small head nods, breathing pauses in speech – all of it together creates something that feels very human.

The Workflow Is Significantly Faster

Last year, creating Teele took about a week of intensive work. This year, we finished in half a day, with notably better results. This wasn't just due to our growing experience – it's more that the tools have simply improved that much.

The Thinking Is Deeper and More Human

Last year, we used ChatGPT as Teele's "brain." This year, we chose Claude Opus, which reasons more deeply and weaves real facts into its responses. It isn't vastly more capable than GPT-5.2, but Claude models have a certain "more human" quality, and Teele's predictions became much more substantive and interesting as a result.

The Technologies We Used

One of the key lessons from last year was that achieving the best results requires combining different specialized tools. This year we needed fewer of them, which made the work much faster and more convenient. Our toolkit looked like this:

1. Thinking – Claude Opus

Teele's "brain" – generating responses – was handled by Anthropic's Claude Opus model. We created a custom style for it that gave Teele a distinctive "voice" – confident, slightly humorous, and forward-looking. Claude's ability to account for real facts and events made the predictions much more believable.
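A persona like this is typically wired up as a system prompt. Below is a minimal sketch using the shape of the Anthropic Messages API; the model id, persona wording, and sample question are illustrative assumptions, not our production setup:

```python
# Hypothetical sketch: giving an avatar a persona via a system prompt.
# Model id and persona text are assumptions for illustration only.

def build_persona_request(question: str) -> dict:
    """Assemble keyword arguments for an Anthropic messages call."""
    system_prompt = (
        "You are Teele, an AI avatar on Estonian television. "
        "Speak confidently, with light humor, and look forward: "
        "ground your predictions in real, verifiable facts and events."
    )
    return {
        "model": "claude-opus-4-20250514",  # assumed model id
        "max_tokens": 1024,
        "system": system_prompt,
        "messages": [{"role": "user", "content": question}],
    }

# The dict can be passed straight to the official SDK:
#   import anthropic
#   client = anthropic.Anthropic()
#   reply = client.messages.create(**build_persona_request("What will 2026 bring?"))
```

Keeping the persona in the `system` field, separate from the viewer questions in `messages`, means the "voice" stays stable no matter what Teele is asked.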

2. Appearance – Krea and Nano Banana

For Teele's visual appearance, we used Krea.ai and Nano Banana models. We used last year's images as reference so Teele would be recognizably the "same person," but with a slightly fresher, more contemporary look. The result was realistic and perfectly suited for TV format.

3. Voice – ElevenLabs v3

A major step forward compared to last year! The ElevenLabs v3 model now offers excellent Estonian language support. Additionally, we could control the emotion in the voice – adding breathing pauses, emphasis, and other nuances that make speech much more natural. Last year, finding a good Estonian voice was one of our biggest challenges – now that problem is essentially solved.
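In practice this kind of generation is a single HTTP call per line of script. The sketch below shows the rough shape of an ElevenLabs text-to-speech request body; the model id, placeholder voice id, voice-setting values, and the inline pause markup are assumptions for illustration, not verified production values:

```python
# Hypothetical sketch of an ElevenLabs text-to-speech request.
# Model id, voice id, settings, and pause markup are illustrative assumptions.

def build_tts_request(text: str, voice_id: str = "VOICE_ID") -> tuple[str, dict]:
    """Return the endpoint URL and JSON body for a TTS call."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
    body = {
        "text": text,
        "model_id": "eleven_v3",  # assumed id for the v3 model
        "voice_settings": {
            "stability": 0.5,        # lower = more expressive delivery
            "similarity_boost": 0.75,
        },
    }
    return url, body

# Estonian line with an inline breathing pause (markup is an assumption):
url, body = build_tts_request(
    'Head uut aastat! <break time="0.6s" /> Vaatame, mida 2026 toob.'
)
```

The emotional nuance mentioned above comes from tuning the voice settings and sprinkling pause markup into the script text, rather than from any post-processing of the audio.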

4. Avatar – HeyGen

In HeyGen, we combined voice and appearance to create a moving, talking avatar. The platform's lip-sync technology has improved significantly over the year – lip movements are more accurate, and movement and facial expressions more natural.

5. B-roll – KlingAI

Beyond the main clips where Teele speaks, we needed filler shots and transitions. Using KlingAI, we created clips where Teele "waits for questions" – calmly looking at the camera, blinking, moving her head slightly. We also created "drone shots" and other visual elements that gave the whole production a cinematic quality.

Watch the Result

You can watch Teele's full prediction here:

We also discussed Teele's creation on Kanal2's morning show:

https://duoplay.ee/9644/?ep=389 (from 1:42:28)

What's Next?

Where will we be a year from now? Most likely, we won't be able to tell the difference between AI and real humans anymore. And if things go well, next year we'll be able to show a real-time avatar – one that can react and respond in live broadcast.

Technology is evolving rapidly. The question is no longer whether AI avatars will become indistinguishable from humans – the question is what we'll do with that capability.

Tags: heygen, elevenlabs, claude, krea-ai, klingai