I recently watched a film not because I was interested in the story or plot, but because I was fascinated by the technology used to make the film.
Gemini Man (IMDB, Wikipedia) is about an assassin who meets a genetically engineered clone created from his own DNA.
To make the film, the filmmakers used AI to generate a younger Will Smith face and digitally projected it onto present-day Will Smith's body. The AI-generated face was created using footage from Will Smith's earlier career.
That’s right – the younger Will Smith in the film is essentially deepfake footage of Smith, built from footage of his earlier self.
I can’t help but think about what this might mean in the future. Could Will Smith continuously generate ageless versions of himself, and own the rights to them? Is a future Will Smith even required to generate ageless “Will Smith” films? Will Will Smith become an ever-present actor in films going forward, simply because we are all curious about how his characters evolve across films and franchises?!
Think about the James Bond franchise. Or Jason Bourne. Or Indiana Jones.
In the future, will we have entirely AI-generated movie personalities that are owned by companies because they don’t represent any one actor or actress specifically? As in, will there be idealised AI-generated action stars or romantic leads?
And what about awards? Can an AI-generated lead character win an award for best actor or actress?
It is just so fascinating. What a time to be involved in AI and ML.
I recently came across a wonderful suite of materials for introducing statistical learning:
Hastie et al.’s free textbook (a link to the PDF can be found on this page).
The accompanying lecture videos – 15 hours in total – freely available on YouTube (an outline of, and links to, the videos here).
Additional slides provided by Professor Al Sharif (here), including PDF documents of R scripts and explanations for a wide range of topics covered in the book.
To give folks a feel for the content: it covers many of the techniques taught in the University of Queensland’s graduate-level Machine Learning course. It also covers many of the techniques my colleagues and I used at Shell to help optimise their massive coal-seam gas business in Brisbane, Australia.
In the aftermath of the global financial crisis, some experts proclaimed that a key remedy was to ‘make banking boring again.’ The implication being that, essentially, money and drugs and Ferraris were the crux of the motivation behind the risk taking that led to the GFC.
Relatedly, I tend to believe that a lot of boring engineering work actually generates a lot of value. But it isn’t sexy, so it often gets ignored by relative newcomers. Eventually those newcomers get bitten by the risks of skipping the unsexy bits. Then they, too, will write relatively obscure blog posts about them.
Until then, I’d like to highlight a key factor in setting data analytics projects up for success: a clear definition of the problem to be solved.
Note that it is ‘problem definition’, not ‘tool definition’. The problem definition does not dictate whether machine learning will or will not be used. It simply requires that the problem be stated clearly enough that engineers can determine effective options for solving it.