AI In Film Industry

Reading Time: 2 minutes

The film industry is becoming more advanced every day, and with the help of still-developing artificial intelligence (AI) it is undergoing a transformation that both encourages creative endeavors and raises ethical concerns. One company at the forefront of this AI revolution is StoryFit, whose CEO, Monica Landers, navigates the delicate balance between technological advancement and the essence of human storytelling.

A StoryFit Revolution

StoryFit’s innovative use of AI extends beyond script analysis; it delves into the intricate nuances of how audiences connect with narratives and characters. The company compiles data on storytelling elements, offering insights that help studios make important decisions such as script acquisition, character promotion, and film adaptations of books.

Originally designed to help publishers sift through book submissions, StoryFit redirected its focus to the film industry, becoming a driving force behind successful films and TV series. The company’s AI technology evaluates a script’s marketability, providing valuable data on the potential success of high-risk film investments.

Unveiling Character Dynamics

One of StoryFit’s remarkable achievements lies in its ability to analyze character traits using AI. By assessing audience responses, the technology gauges the heroism or relatability of main characters, aiding creative professionals in identifying and correcting potential imbalances.

The company’s application of AI extends to beloved TV series like “The Queen’s Gambit” and HBO’s “The Last of Us,” where it measures characters’ strength and originality. This data-driven approach not only celebrates exceptional storytelling but also serves as a tool to navigate the high-stakes film industry.

AI’s Influence Beyond Storytelling

As AI permeates various facets of filmmaking, concerns arise about its impact on content creation, especially in nonfiction and documentary spaces. The filmmaking industry is in dire need of advocacy for protections against AI misuse and of the establishment of ethical guidelines for decision-making processes.

The dark side of AI appears in the form of black-box algorithms that dictate popularity, influencing which stories get told and how. Social media platforms, particularly TikTok, reward content tailored to algorithms designed to trigger dopamine release. In Hollywood, producers secure lucrative deals by catering to AI-driven decision-making processes at studios and streaming platforms.

Documentary Filmmaking at Risk

The article underscores the vulnerability of documentary filmmaking to AI curation, where data-driven decisions shape content exposure. It points to the potential loss of human curation, transparency, and accountability as algorithms decide which projects to buy and how to create them.

Filmmakers and industry veterans express concerns about giving AI decision-making authority, which could lead to risk aversion and a decline in innovative content. The ethical dilemmas surrounding deepfake technology call into question the trustworthiness of content and the preservation of nonfiction storytelling’s integrity.


Even as AI demonstrates its value in boosting creativity and decision-making, it risks being given too much authority. It is necessary to uphold human judgment, accountability, and openness in an industry that progressively depends on insights generated by AI.

In summary, there is a pressing need to safeguard the authenticity of nonfiction storytelling, placing a high value on truth and trust. With the ongoing integration of AI into filmmaking, maintaining a robust moral foundation rooted in principles like honesty and respect is essential to establishing a balanced and cooperative relationship between technology and human-driven storytelling.

AI Is Coming for Filmmaking: Here’s How – The Hollywood Reporter

Can Artificial Intelligence Help The Film Industry? It Already Is. (forbes.com)


Unreal Engine 5 – a new era of computer-generated images

Reading Time: 4 minutes

Earlier this year Epic Games released a new computer graphics engine that revolutionized the industry. Two weeks ago a new update appeared, and scenes created with Unreal Engine 5.1 are now nearly indistinguishable from real-life footage.


When it comes to visual effects in movies, with the right budget and time, filmmakers can make everything look realistic. Movie shots are scripted: the camera moves in a certain direction, and nothing will change that. The computer-generated images (CGI) we see when watching a movie are pre-rendered. In games, where the player controls the camera, the frames on the screen must be rendered in real time. For the human eye to perceive smooth motion, we need at least sixty of them for every second of the game. Generating that much data in so little time is computationally demanding, and many players still can’t afford to admire the beauty of the shots because they have to keep the graphics settings at a minimum to make the game run on their computers. This is where Unreal Engine 5 comes in.
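The frame-budget arithmetic behind the paragraph above can be sketched in a few lines. The 60 fps figure comes from the text; the two-hour offline render time per movie frame is only an illustrative assumption, not a quoted figure:

```python
# Back-of-the-envelope frame-time budget for real-time rendering.
# 60 fps is the article's figure; the 2-hour pre-render time per
# movie frame is a hypothetical number for comparison.

target_fps = 60
frame_budget_ms = 1000 / target_fps          # ~16.67 ms to produce each frame

offline_frame_ms = 2 * 60 * 60 * 1000        # a hypothetical 2-hour pre-render

# How many times faster the real-time engine must be than that pre-render:
speedup_needed = offline_frame_ms / frame_budget_ms

print(f"Frame budget at {target_fps} fps: {frame_budget_ms:.2f} ms")
print(f"A 2-hour offline frame is {speedup_needed:,.0f}x over that budget")
```

Whatever the exact offline render time, the point stands: a game engine has milliseconds where a render farm has hours.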

One of the biggest problems in CGI is lighting. It is very easy to identify bad light in a scene. Realistic light, however, can be achieved thanks to global illumination: the way light bounces off one object and lights up another. This is a difficult process to simulate, especially in real time. So far, graphic designers have used light baking, which pre-renders light maps. This method works very well, but if there is even a slight change in the positioning of objects, the whole process has to be done again from scratch, which is very time-consuming. The lighting system in Unreal Engine 5, called Lumen, solves this problem. It renders light in real time, taking moving objects into account, and focuses on the multi-bounce global illumination of the main light source, which is why it looks so realistic. This improves workflow and speed, both very valuable considering short project deadlines.

Lighting was difficult for the developers, but from the player’s perspective the most challenging part is rendering the map elements. This is all due to the building blocks of 3D models called polygons. The more detailed an element is, the more of them it comprises. A single CGI location can contain thousands or even millions of them. These are difficult to render on the average home computer, which can cause the game to crash or freeze, making it unplayable. A simple way to solve this problem is to lower the level of detail – the number of polygons in a scene – but then the game looks very flat and is not pleasing to the eye. A new Unreal Engine 5 feature called Nanite dynamically simplifies the environment by lowering the total number of polygons on an object. It changes that number depending on how far away the object is: the closer the item is to the camera, the more polygons it consists of.
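The distance-based scaling described above can be illustrated with a toy function. This is only a sketch of the general level-of-detail idea, not Epic’s actual Nanite algorithm; the polygon counts, the 10-meter full-detail cutoff, and the quadratic falloff are all invented for illustration:

```python
# Illustrative sketch of distance-based level of detail (LOD) -- the general
# idea of scaling polygon counts with distance, NOT Epic's actual Nanite code.

def lod_polygon_count(base_polys: int, distance_m: float,
                      full_detail_m: float = 10.0) -> int:
    """Return how many polygons to render for an object at a given distance.

    Objects closer than `full_detail_m` keep every polygon; beyond that,
    the budget falls off with the square of the distance, since an object's
    on-screen area shrinks roughly quadratically as it recedes.
    """
    if distance_m <= full_detail_m:
        return base_polys
    scale = (full_detail_m / distance_m) ** 2
    return max(1, int(base_polys * scale))

statue = 1_000_000  # a detailed asset with one million polygons
print(lod_polygon_count(statue, 5))    # close up: all 1,000,000 polygons
print(lod_polygon_count(statue, 100))  # far away: only 10,000 polygons
```

The payoff is the same as in the engine: distant objects cost a tiny fraction of their full polygon budget, so the scene stays renderable within a real-time frame budget.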


These two elements are revolutionary for the CGI industry. The new 5.1 update patches some of the problems the engine has had so far. Worth mentioning are the fixes for global illumination. Transparent objects transmit and reflect light, which didn’t work very well in the previous version. Now the reflection of glass is more realistic, as is water: it is no longer milky white, but actually shines through and reflects the objects around it depending on the camera position, rather than being just a blurry patch of light.


Currently Epic Games is working on improving the Nanite technology. For now it can only be used on static objects, but they are trying to make it work on moving characters too.

The last thing is MetaHuman. It is a creator of realistic digital humans, similar to creating characters in The Sims series. It is an amazing tool for creating NPCs in games, but also characters in animated movies. Another impressive feature is the animation and movement of the created character. This no longer takes hours of manual work, but just a few commands: the computer calculates how a motion should look and performs it by itself. This reduces the human work almost to zero. As a result, the time needed for a given project is much shorter – creating a movie will no longer take years as it used to.

Creating a MetaHuman from a Preset

In 2017, Rogue One: A Star Wars Story became the first film to use a game engine on set – Unreal Engine 4 – followed by Disney’s The Mandalorian series and HBO’s Westworld. Now, with the latest version 5.1, we can expect even more collaboration with the movie field in the future. As for the game industry, many developers have announced that they will be working with Unreal Engine rather than its competitors. We can expect games such as the new Tomb Raider, The Witcher and Redfall. So far, we can experience UE5 in action by playing The Matrix Awakens and the latest season of Fortnite, or by watching a number of short films created by various small studios. One of them is The Eye: Calanthek created by ASC, which I highly recommend watching:



The Vision of Tomorrow

Reading Time: 3 minutes

From the myriad of new technologies and innovations presented at CES, the world’s largest electronics show in Las Vegas, one concept stood out in the spotlight – the Mercedes-Benz VISION AVTR. This advanced transportation vision, inspired by James Cameron’s iconic movie Avatar, is part of the Mercedes-Benz strategic agenda for the coming years, which strives for a more sustainable future. The connection to the film stems from its essential message: the reasoning behind the Mercedes-Benz plan fits perfectly with Avatar’s environmental and spiritual themes. During the reveal, James Cameron, the director of Avatar, joined Mercedes-Benz representatives in highlighting the importance of sustainability and a coexistence of technology and humans that does not interrupt the development of nature.


This futuristic concept vehicle is said to be entirely eco-friendly. It is electric, carbon-neutral, and designed to interact with the surrounding world. The exterior of the car is meant to blend in with nature: the doors open in a way that imitates a dragonfly, and at the back of the car are 33 individual hatches that resemble the breathing scales of a reptile.


The futuristic design extends even to the interior of the vehicle: there is no steering wheel, and the car is driven using biometrics. You are one with the car. This concept of using the driver’s biometrics is adapted straight from the Avatar movie and is meant to imitate the symbiotic relationship between driver and vehicle. The comfort and infotainment electronics in this vehicle are controlled with the driver’s hand: with waving gestures, we are able to operate the car.


After getting in, the control system lights up on your right hand, and by gestures and waving you can operate this eye-popping vehicle. By touching the element in the center of the car, it wakes up; the lights and the element move up and down to imitate breathing and a heartbeat. Not only does this car drive forward and backward like a regular car, but it also has the ability to drive sideways. Its large, oddly flamboyant wheels enable the car to move perpendicularly, somewhat resembling a crab walk. The vehicle went through a real-life test on the roads of Las Vegas and proved itself ready to hit the streets and saturate them with an aspirational and eco-friendly idea of the future.



The presence of James Cameron and his collaboration with Mercedes-Benz is not only about sustainability, but also about the long-awaited sequel to the Avatar movie. Whether it is just costly marketing or a real car that could one day be seen on the market and the roads, we cannot deny that it is a successful fulfillment of the statement “Vision of Tomorrow.”




How is Artificial Intelligence starting to have a serious effect on our lives?

Reading Time: 4 minutes

Have you seen Minority Report directed by Steven Spielberg? 



For those who haven’t, I recommend watching it, because the prophecy of this film is beginning to come true.

Because technology is constantly developing, and with it the significance of Artificial Intelligence in our lives keeps increasing, we can definitely be scared about what is happening around us.


Have you ever thought about what one of the most intimate things in people’s lives is?

It is sexual orientation. Nowadays many people (athletes, family members, schoolmates) hide their real sexuality for fear of social indignation. These people have to struggle with the inner battle of “coming out” every single day, and now it may get worse. AI can now guess whether you are gay or straight based on photos of your face. That is a fact, not an opinion! We can now say that machines have started to be a better “gaydar” than people. A study from Stanford University found an algorithm that could distinguish whether you are gay or straight with 81% accuracy for men and 74% accuracy for women.

The algorithm was trained and tested on 35,000 facial photos from one of the dating sites, and from that data it learned to find out a person’s real sexual orientation.

“The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.”   – Sam Levin, The Guardian
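The headline accuracies quoted above (81% and 74%) come from a standard supervised-classification setup: numeric features per face, a binary label, and accuracy measured on held-out data. Below is a minimal, self-contained sketch of that setup on synthetic data; the single feature and every number in it are invented stand-ins, not the study’s actual facial measurements or model:

```python
# Minimal sketch of a supervised binary-classification pipeline:
# features -> label -> accuracy on held-out data. All data here is
# synthetic; this is NOT the Stanford study's model or features.
import random

random.seed(0)

def make_sample(label: int) -> tuple[list[float], int]:
    # One invented feature, shifted by class so the problem is learnable.
    x = random.gauss(1.0 if label else 0.0, 1.0)
    return [x], label

data = [make_sample(random.randint(0, 1)) for _ in range(1000)]
train, held_out = data[:800], data[800:]

# Trivial "classifier": threshold the feature at the midpoint of the
# two class means estimated from the training set.
mean0 = sum(x[0] for x, y in train if y == 0) / sum(1 for _, y in train if y == 0)
mean1 = sum(x[0] for x, y in train if y == 1) / sum(1 for _, y in train if y == 1)
threshold = (mean0 + mean1) / 2

def predict(features: list[float]) -> int:
    return 1 if features[0] > threshold else 0

accuracy = sum(predict(x) == y for x, y in held_out) / len(held_out)
print(f"held-out accuracy: {accuracy:.0%}")  # well above the 50% chance level
```

The point of the sketch is that any reported accuracy of this kind is just the fraction of held-out examples the model labels correctly, so its meaning depends entirely on how representative that held-out data is.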


Okay, what if I am straight?

The authors of the study, which was published in the Journal of Personality and Social Psychology, Dr. Michal Kosinski and Yilun Wang, claim that similar AI systems could be trained to spot other human traits, such as IQ or political views. They also warn us about this direction of AI development, because it could turn into something that we don’t really want in our lives.

It is happening now!

Police in the UK are piloting a new project that uses AI to determine how likely someone is to commit a crime. Sounds familiar? Going back to what I wrote at the beginning of my post, Steven Spielberg (director) and Philip K. Dick (writer) were right: AI is going to try to prevent us from committing crimes.


“(…) The system has 1,400 indicators from this data that can help flag someone who may commit a crime, such as how many times someone has committed a crime with assistance as well as how many people in their network have committed crimes. People in the database who are flagged by the system’s algorithm as being prone to violent acts will get a “risk score,” New Scientist reported, which signals their chances of committing a serious crime in the future. (…)

(…) Donnelly told the New Scientist that they don’t plan to arrest anyone before they’ve committed a crime, but that they want to provide to those who the system indicates might need it. He also noted that there have been cuts to police funding recently, so something like NDAS (National Data Analytics Solution) could help streamline and prioritize the process of determining who in their databases most needs intervention. (…)”
– Melanie Ehrenkranz, gizmodo.com

The project is still in its infancy compared to how important it could become for the future of the justice system.

To sum up my post: there are billions of facial images of people publicly available on social media sites, in government databases, and from street cameras. In my opinion we should try to care more about our privacy in the media and not let governments have such a serious impact on our lives, because, as we know, systems are like people: they sometimes fail.








author: Michał Żelazo
