BBC 4.1 experiment proves our fears about AI are unfounded… for now

AUNTIE BEEB'S audacious mini-season of programming about artificial intelligence (AI) finished last night with its centrepiece, 'Made by Machine: When AI Met The Archive', a demonstration of short sequences of telly constructed by a neural network from the millions of hours in the BBC archive.

The programme, presented by Dr Hannah Fry with interjections from a disembodied head, began by making purring noises about how safe all this is. Fry invited us to watch "the limits of its learning unfold", whilst the head, which we believe one addresses as "BBC 4.1", purred "Relax, it's going to be fine".

Sure enough, as we've seen in other programmes in the season, the fear of the rise of the machines goes in waves, and as we reach new milestones in the 21st century it has returned, fuelled by doomsayers such as the late Stephen Hawking and potty-mouthed entrepreneur Elon Musk, who warn that mankind could end up obliterated by intelligent machines.

The reassurances were quite unnecessary in the event – in fact, the limits of what the BBC’s R&D department could get its box of tricks to do were fairly self-evident.

Billed as an 'experiment', there can be little doubt that's exactly what it was, with sequences created that, were it not for the context, would have been borderline unwatchable at times.

Three sequences were created for the show – one based on object recognition, one on text recognition and one on movement within the clip. Finally, a fourth stitched the three techniques together to make what BBC 4.1 considered as "BBC Four"-like as possible.

This was one of the main limitations of the show. Free rein over the entire archive was possible, but not desirable, as the results needed not to send viewers darting for the remote too often.

In the event, the clips came from BBC Four's own archive, but that provided a contrast of its own, thanks to so much of the channel's content being based on the archive anyway.

Luckily, to make what could otherwise have seemed like someone channel-hopping engaging, the screen was split, allowing us to see exactly what the neural network's thought process was.

It wasn't very sound. From the very first clip of cartoon band Gorillaz, it was clear that our dear chum had a lot more learning to do.

Describing lead singer 2-D as 'a clock on a pole' began an often hilarious stream of miscategorisation. Any man with long hair was labelled a woman, anyone with bags under their eyes was recognised as wearing glasses, and a man with nothing in his hands was tagged as holding a mobile.
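
For a sense of what this per-frame labelling involves, here is a minimal sketch using an off-the-shelf classifier – torchvision's pretrained ResNet-50 is our assumed stand-in, not a method BBC R&D has confirmed using:

```python
# A minimal sketch of per-frame object labelling with a stock classifier.
# torchvision's pretrained ResNet-50 is our assumption; the BBC's actual
# pipeline has not been published.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
categories = weights.meta["categories"]

def label_frame(path, top_k=3):
    """Return the classifier's top-k guesses for one video frame."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        probs = model(batch).softmax(dim=1)[0]
    conf, idx = probs.topk(top_k)
    return [(categories[i], round(float(c), 2)) for i, c in zip(idx, conf)]

# A frame of a long-haired singer can plausibly come back as something
# like [('wig', 0.31), ('cloak', 0.12), ...] - confidently wrong.
```

The point the programme made lands the same way here: the model returns its best guess for whatever it is shown, with no notion of whether that guess makes sense.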

Let's be clear – none of these are criticisms of a fascinating experiment, but in conducting it, we realise that however fast neural networks learn, they've got masses more to learn before they can make anything remotely like a programme as we would recognise it.

But the biggest problem we saw with this section was that, aside from the 'thought loops' that anyone who has ever fiddled with SwiftKey will recognise, one of the objects most likely to be picked up appeared to be the BBC Four DOG (the on-screen logo) in the corner of the screen.
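
Those 'thought loops' are easy to reproduce: any greedy predictor that always takes the most likely continuation can cycle forever. A toy example, with an invented three-word model:

```python
# A toy illustration of a 'thought loop': greedy next-word prediction over
# an invented bigram table. Always taking the top continuation cycles.
bigrams = {"the": "clock", "clock": "on", "on": "the"}

def greedy_complete(word, steps=8):
    """Follow the most likely next word, SwiftKey-style."""
    out = [word]
    for _ in range(steps):
        word = bigrams.get(word)
        if word is None:
            break
        out.append(word)
    return " ".join(out)

print(greedy_complete("the"))
# -> "the clock on the clock on the clock on"
```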

The fact is, whilst BBC 4.1 can recognise likenesses – in words or objects – and find something that literally follows on, when it comes to understanding what it sees and inferring from context, it's completely rubbish.

It's worth remembering that machine learning, by its very nature, means that it will improve over time, but it only takes a few stray pixels, an object moving too fast to grasp, a homophone or a pun to throw it off completely.

For word recognition, the wealth of subtitled (closed-captioned) shows in the archive offered up a whole new use case that the creators of Ceefax could never have dreamed of.
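
To illustrate the idea, here is a deliberately naive sketch of caption-based chaining – the data layout and the 'hand over a word' rule are our assumptions, not the BBC's pipeline:

```python
# A hedged sketch of subtitle-based chaining: link clips where one caption's
# final word matches the next caption's opening word. The clip records and
# the matching rule are invented for illustration.
from collections import defaultdict

clips = [
    {"id": "clip_a", "caption": "the history of the clock"},
    {"id": "clip_b", "caption": "clock towers across britain"},
    {"id": "clip_c", "caption": "britain in the swinging sixties"},
]

def chain_by_caption(clips):
    """Greedily link clips whose captions hand over a word."""
    by_first_word = defaultdict(list)
    for clip in clips:
        by_first_word[clip["caption"].split()[0]].append(clip)
    sequence, used = [clips[0]], {clips[0]["id"]}
    while True:
        last_word = sequence[-1]["caption"].split()[-1]
        candidates = [c for c in by_first_word[last_word] if c["id"] not in used]
        if not candidates:
            break
        sequence.append(candidates[0])
        used.add(candidates[0]["id"])
    return [c["id"] for c in sequence]

print(chain_by_caption(clips))  # ['clip_a', 'clip_b', 'clip_c']
```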

The third sequence, in which the AI attempted to match clips based on movement and rules of engagement for how a show looks, was the most bewildering of all, with the AI working so fast that it was impossible to tell what its thought process actually was.
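
One plausible reading – and it is only our assumption – is that clips were compared on coarse motion statistics. A sketch of scoring a clip's average motion with OpenCV's Farneback optical-flow estimator:

```python
# Score a clip by its average optical-flow magnitude; clips with similar
# scores could then be cut together. This is our guess at a mechanism, not
# a documented BBC method.
import cv2
import numpy as np

def motion_score(video_path, max_frames=120):
    """Mean optical-flow magnitude over the first max_frames of a clip."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return 0.0
    prev_grey = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    magnitudes = []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(
            prev_grey, grey, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        magnitudes.append(np.linalg.norm(flow, axis=2).mean())
        prev_grey = grey
    cap.release()
    return float(np.mean(magnitudes)) if magnitudes else 0.0
```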

Finally, the three techniques were combined to create something that, to be fair, looked not dissimilar to the others. But it's this combination, together with the system's ability to 'learn', that will eventually – very eventually – become something akin to a workable way of assisting programme editors, though on this evidence, certainly not of replacing them in our lifetime.
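
Purely as illustration, a fourth pass might blend the three signals into a single score for each candidate clip – the weights and the scoring rule below are entirely invented:

```python
# An invented blend of the three per-clip signals (object labels, caption
# words, motion) into one 'follows on nicely' score; nothing here is a
# documented BBC method.
def jaccard(a, b):
    """Overlap between two sets of labels or words, 0..1."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def combined_score(clip, prev, weights=(0.4, 0.4, 0.2)):
    """Weighted blend; the weights are made up for illustration."""
    w_obj, w_text, w_motion = weights
    obj = jaccard(clip["labels"], prev["labels"])
    text = jaccard(clip["caption"].split(), prev["caption"].split())
    motion = 1.0 - min(1.0, abs(clip["motion"] - prev["motion"]))
    return w_obj * obj + w_text * text + w_motion * motion

prev = {"labels": ["man", "guitar"], "caption": "a man plays guitar", "motion": 0.3}
nxt = {"labels": ["man", "stage"], "caption": "on stage the man sings", "motion": 0.4}
print(round(combined_score(nxt, prev), 3))  # 0.363
```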

Among the relatively few criticisms on social media last night were the embodiment of BBC 4.1 as a virtual host, which a few people felt cheapened the process, and the way the screen was laid out to look like something out of a sci-fi adventure.

Our only question there is – yes, but how would YOU have done it, in a way that made it accessible to children of a lesser geek?

Others noted that the AI had mostly picked clips of white, middle-aged men – but given that this is BBC Four’s core audience, that’s not so much a surprise or a complaint, as an observation on the world at large.

Indeed, considering that most of the media is obsessed with celebrity, one thing BBC 4.1 can’t do right now is recognise anyone specific, beyond “man” or “woman” or sometimes “cat”.

It repeatedly chose clips of Brian Eno without knowing who he was, or whether he had anything to do with another favourite topic, passenger cruise liners.

Because the fact is, making TV is a creative process, and at this stage, AI is still all about logic.

Nobody, Auntie or otherwise, had any right to expect a fully formed, machine-made TV show. Although IBM has experimented with compiling Wimbledon highlights this way, that's actually a much easier task than trawling 20 years of BBC Four shows, let alone the whole archive.

What was incredible, however, was that we could glimpse the thought process of an AI through reference points we could understand. It's something that has never been achieved before, though Janelle Shane's ideas for naming metal bands, guinea pigs and paint colours provide some insight.

But although it would be easy to class the experiment as a failure, sometimes you have to go back and remind yourself of the objective. It wasn't about making a fully formed solution. It was about demonstrating what was possible and, more importantly, what wasn't.

The question now is: what if BBC 4.1 is allowed to keep trawling the archive, quietly? The really interesting bit will be if the exercise gets repeated at regular intervals to see how machine learning develops, and at what speed. Plus, of course, training it with new skills, such as identifying specific people, as we've suggested, and a better understanding that a pig's trotter and Del Trotter don't necessarily marry up.

However, thanks to this experiment, and the wider AI TV season, which includes a range of new and archive shows, including documentaries about the rise of the robots filmed 50 years ago, it becomes obvious that our fears then and our fears now are the same, and largely unfounded. The difference isn't so much that AI has got smarter – though in relative terms it has. It's that it has become smaller, more democratised and cheaper, giving us access to it throughout our daily lives.

BBC 4.1 has been an amazing experiment and a solid start to a technique that, at this stage, seems a long way from perfection. But we’ll only know that if BBC 4.1 returns as BBC 4.2, older, wiser and hopefully with a better sense of what stuff actually means, not just what it is.

It’s not always the easiest watch, but if you have even a passing interest in AI, then this is essential viewing. μ

“Made by Machine: When AI Met The Archive” and the rest of the AI TV season is available on BBC iPlayer until October 6th.

Source: The Inquirer
