
 WEEK 1

Age of the Image: A New Reality, BBC Four, 60 mins. Available at: https://learningonscreen.ac.uk/ondemand/index.php/prog/158E7D18?bcast=131384361 (Accessed: October 16, 2022).

Professional photographers complain about the decline of the art of taking pictures. This sounds a little paradoxical in an era when digital cameras, and increasingly smartphones, take more photos than ever before in history. There is, however, a clearly visible fashion – and perhaps an even more enduring tendency – to create reality rather than represent it. Is the world of digital photography, smartphone cameras and their apps rightly criticised? After all, thanks to these technologies, many forms and techniques of picture-making are now available to everyone. Take the panoramic image: it used to demand mastery of the equipment and great technique, whereas now it is enough to switch on the appropriate mode or app and rotate the camera steadily around its axis. The software not only stitches the pictures together but also removes some of the imperfections of shooting "by eye". The well-liked HDR (High Dynamic Range) filter does much the same: a high-quality, well-balanced image is created by combining several different exposures of the same scene. At least one of the shots uses a longer exposure to deal with shaded areas, and one uses a short exposure to hold detail in the highlights. Of course, all of this work could be done manually: a photographer, skilfully and quickly bracketing the exposure, would take several shots, and someone skilled in Photoshop would then layer them together. But why bother, if a smartphone does it in a moment?
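To make the idea concrete, here is a minimal sketch of the kind of exposure merging an "HDR mode" performs, assuming three hypothetical bracketed shots of the same framing saved as under.jpg, normal.jpg and over.jpg; it uses OpenCV's Mertens exposure fusion, not any particular phone's pipeline.

# Exposure fusion: blend the best-exposed regions of several bracketed shots.
import cv2

# Load the bracketed exposures (dark, normal, bright) of the same framing.
exposures = [cv2.imread(p) for p in ["under.jpg", "normal.jpg", "over.jpg"]]

# Fuse them: shadow detail comes mainly from the long exposure,
# highlight detail from the short one.
fusion = cv2.createMergeMertens().process(exposures)  # float image in roughly 0..1

# Scale back to 8-bit and save the result.
cv2.imwrite("fused.jpg", (fusion * 255).clip(0, 255).astype("uint8"))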

The theme of photographic truth has been widely discussed since the very beginning of the medium's existence. Manipulating reality is inherent to it, since framing itself involves selecting only what the photographer wants to show. In addition, there are other properties of both the camera and the act of shooting: for example, the perspective specific to different focal lengths, which can alter the spatial relations of objects (they may seem to sit closer together), or the selection of frames and their placement in completely different contexts (e.g. press photos supplied with an inadequate caption). The media in particular frequently take advantage of these properties to bend the facts.

​

Z. Beksinski (1998)

[Images: examples of photographic manipulation]

 WEEK 2

[Images: creative hard-shadow photography]

Photo: Wendy Hope

Gunning, T. (2017) 'What's the Point of an Index? or, Faking Photographs', Nordicom Review, 25(1-2), pp. 39-49. https://doi.org/10.1515/nor-2017-0268 (Accessed: October 16, 2022).

​

Choudhry, A. (2021) What is the Photographic Truth? Available at: https://fstoppers.com/fine-art/what-photographic-truth-555292 (Accessed: October 16, 2022).  

This week we looked into the photographic theory known as 'indexicality' and the truth claim. The 'truth claim' is the term Tom Gunning uses for the popular belief that traditional photographs depict reality accurately. He concludes that the claim of truth rests on both the indexicality and the visual precision of photographs.
Ali Choudhry, in his piece 'What is the Photographic Truth?', argues that photography as an art form confronts truth as a concept. In most other art forms truth is largely irrelevant: we do not dispute whether a painting is real, in dance we do not ask whether it is real, and in literature we can tell fictional texts from non-fiction and engage with them without judging them by their literal truth. Photography is different.
Mayes' claims rest mainly on an inseparable truth assumed to lie in photography: that a photograph has more inherent truth than an image made of algorithms and code. Yet photography, from the moment it was invented, was never "real" – a picture is an image, not the thing itself. Bayard made us realise, through staging and smoke-and-mirrors, that the illusion of truth could be created in photographs from the medium's earliest days.

 WEEK 3


A sepia gelatin silver print of Frances Griffiths taken by Elsie Wright. Photograph: Dominic Winter Auctioneers/PA

Cottingley Fairies fake photos to go under the hammer | Photography | The Guardian

Vivian Maier (1958) Self-portraits

 Self-Portraits | Vivian Maier Photographer

"The advent of the digital image changes cinema’s relationship with physical reality. No longer, the story goes, are we dealing with an image based (as with photography on film) exclusively on a direct record of objects placed in front of the camera, the essential link between the world and its representation thus established." Cassetti states that physical reality changed in cinema that digital image no longer is related. It simply means that mostly of time postproduction will take over it and a lot of changes may be applied. By using technology and software. Nowadays what we observed is huge revolution in technology which constantly changing. 

Composition in video means the arrangement of the elements visible in the frame. A successful composition is one in which these elements form a coherent, harmonious whole, so that the film is pleasant to watch and looks professional. Today I'm going to go through the basic rules of composition that will help you start creating more thoughtful and, above all, much more interesting outcomes. We must remember that film is entangled in the historical and aesthetic development of the visual arts, which is why we can also analyse the composition of a film image in relation to the composition of paintings, photographs and even architectural diagrams. Many directors and cinematographers draw inspiration from painting and recreate or reinterpret specific painterly compositions in their films.

 WEEK 4

Hawker, S. and Elliott, J. (2006) 'composite', in Paperback Oxford English Dictionary. Oxford: Oxford University Press. (Accessed: October 26, 2022).

Trends of Photorealism

What does CGI mean? How does it work?


CGI (computer-generated imagery) is the use of computer graphics to create or co-create images in art, print media, video games, simulators, computer animation and visual effects in films, commercials and television programs.

Images can be dynamic or static, and although CGI is most often associated with three-dimensional (3D) graphics, it can also be used to create 2D images. An early landmark use of the technology was the film Flight of the Navigator (1986).

CGI uses algorithms to generate static or animated images, and we will talk about how this looks in detail later in the article.

​

Photorealism is a term in the field of computer graphics, denoting a way of creating images. Thanks to certain tools, the resulting images and videos look very real, as if they were recorded with a camera.

Photorealism brings out the shape and colour of individual objects, so that objects created with this technique give the impression of being real. In the past such an effect would have been impossible to achieve because of technological barriers; today it poses no problem, although not everyone wants the effect of realism.


How do CGI images engage with older cinematic techniques?

Hipstamatic and Instagram changed the perception of photography by lending users' family snapshots a romantic look and delicate light. This became a different way of influencing people's lifestyles through special effects. To this day, companies keep creating newer and newer applications aimed at embellishing the image or creating illusions. In the 1990s, companies began to develop algorithms based on photographic techniques such as blur and depth of field – analogue simulations intended to present the computer-created world as realistically as possible.

Combining photography and CGI opened up a new dimension, and thanks to this the creation of animation became simpler than before.

​

Georges Melies was a genius and virtuoso of cinema: the first to use special effects, influencing a huge share of the solutions used later. He created fabulous, extraordinary stories that still delight even though more than a hundred years have passed since their creation and cinema has moved on in its formal means. Watching Melies' works, you can admire his extraordinary skill in using the capabilities of the camera to create unreal worlds, as well as the precision he achieved in the art of illusion. However, the impression that the creator of A Trip to the Moon produced only féeries and magic tricks on screen is wrong: Melies was a versatile artist who used his experience as a stage conjurer to build interesting storylines.

​

The noise visible at high sensitivities is so-called digital grain. The term dates back to analogue photography, where it referred to the fact that a photograph is made up of more or less visible dots of grain: the higher the film's sensitivity to light, the larger and more pronounced the dots. Larger grain lowers the resolution of the image, understood as its ability to record detail.
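As a rough illustration only – a minimal sketch assuming a simplistic noise model, not how any real sensor behaves – the effect can be simulated by adding noise whose strength grows with the ISO value:

import numpy as np

def add_grain(image, iso, base_iso=100):
    """Add noise scaled with ISO; higher ISO gives stronger, more visible grain."""
    sigma = 2.0 * (iso / base_iso)          # assumed noise strength, grows with sensitivity
    noise = np.random.normal(0.0, sigma, image.shape)
    return np.clip(image + noise, 0, 255).astype(np.uint8)

flat_grey = np.full((256, 256), 128, dtype=np.uint8)  # a flat mid-grey test patch
low = add_grain(flat_grey, iso=100)    # barely visible grain
high = add_grain(flat_grey, iso=3200)  # clearly visible grain
print(low.std(), high.std())           # the high-ISO patch varies much more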

​

One of the advantages of digital over analogue photography, at least in terms of convenience, is the ability to change the sensitivity while shooting without having to rewind a half-used roll of film and load a new one. One picture can be taken at a sensitivity equivalent to ISO 100, another at ISO 400, and another again at ISO 100. The ease of changing sensitivity encourages the use of ISO values we could only dream of in the analogue era.

​

Depth of field is a parameter used in optics and photography: the range of distances from the plane of the sensor (or film) within which objects observed and recorded by an optical device give the impression of being sharp.
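A minimal sketch of the standard thin-lens approximation, with assumed example values (a 50 mm lens at f/2.8, a 0.03 mm circle of confusion, subject at 3 m), shows how the near and far limits of acceptable sharpness can be estimated:

def depth_of_field(f_mm, aperture_n, subject_mm, coc_mm=0.03):
    """Near and far limits of acceptable sharpness (thin-lens approximation)."""
    hyperfocal = f_mm ** 2 / (aperture_n * coc_mm) + f_mm
    near = subject_mm * (hyperfocal - f_mm) / (hyperfocal + subject_mm - 2 * f_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")                  # everything out to infinity looks sharp
    else:
        far = subject_mm * (hyperfocal - f_mm) / (hyperfocal - subject_mm)
    return near, far

near, far = depth_of_field(f_mm=50, aperture_n=2.8, subject_mm=3000)
print(f"sharp from {near / 1000:.2f} m to {far / 1000:.2f} m")  # roughly 2.73 m to 3.33 m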

Simulation and photorealism are key to popular cinema today. CGI blurs the edge between architectural and painted space, turning both into potential regions of a three-dimensional image in which assets such as faked grass, earth, buildings and fences can be applied.


Vignetting, also known as "light fall-off", is common in optics and photography and simply means a darkening of the corners of the image compared with the centre. Vignetting is either caused by the optics or is intentionally added in post-processing to guide the viewer's eye away from distractions in the corners and towards the centre of the image.
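As a small illustration – a minimal sketch assuming a hypothetical input file photo.jpg – a vignette can be added in post with a radial mask that darkens pixels the further they sit from the centre:

import cv2
import numpy as np

img = cv2.imread("photo.jpg").astype(np.float32)
h, w = img.shape[:2]

# Normalised distance of every pixel from the centre (0 at centre, about 1 in the corners).
ys, xs = np.mgrid[0:h, 0:w]
dist = np.sqrt((xs - w / 2) ** 2 + (ys - h / 2) ** 2) / np.sqrt((w / 2) ** 2 + (h / 2) ** 2)

# Fall-off mask: 1.0 at the centre, darker towards the corners (0.6 is an arbitrary strength).
mask = 1.0 - 0.6 * dist ** 2
vignetted = np.clip(img * mask[..., None], 0, 255).astype(np.uint8)
cv2.imwrite("photo_vignetted.jpg", vignetted)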


Special effects in mid-1990s big-budget blockbusters such as Terminator 2: Judgment Day, Jurassic Park and Toy Story became a central part of visual culture. In cinema, post-production generates spectacular effects in lighting and character animation, and this generated intense excitement and critical debate. Despite the divided opinions of critics and fans of The Terminator about the sequel, the film was a large-scale success: among other honours, it received six Academy Award nominations in 1992 and won four statuettes (in the categories of make-up, sound, sound effects editing and visual effects).

​

A historically situated imaginary shapes how technology in cinema is understood. Digital cinema is presented as "...either symptomatic of, or a causal factor in, the ‘virtualization’ of the modern world. We will consider the implications of CGI’s shifting of animation from the margins of cinematic culture back to its center and ask what happens to the audiences of digital cinema."

Example of vignetting (caused by a lens hood)

Flueckiger, B. (2015) 'Photorealism, Nostalgia and Style: Material Properties of Film in Digital Effects', in North, D. et al. (eds) Special Effects: New Histories/Theories/Contexts. London: Bloomsbury, pp. 78-98.

 WEEK 5

Types of capture used in VFX:

​

- Early-stage film techniques – drawings,

- 3D scanning (LiDAR scanners etc.),

- Motion capture,

- Photogrammetry (e.g. Metashape),

- Software (Nuke, Houdini, Maya, ZBrush, Blender etc.)

​

What Are Disney's 12 Principles of Animation?

1. Squash and Stretch

2. Anticipation

3. Staging

4. Pose to Pose vs. Straight Ahead Animation

5. Follow Through and Overlapping Action

6. Timing

7. Arcs

8. Secondary Action

9. Slow In and Slow Out

10. Solid Drawing

11. Exaggeration

12. Appeal

Motion capture, or mocap for short, is a modern technique that maps an actor's movement onto a computer screen in real time, which ultimately makes it possible to create extremely natural-looking computer animations. It is a technology commonly used nowadays, without which it would be difficult to make delightful animations, superhero films or some action scenes in live-action films.

 

Motion capture is in a sense a development of the idea of rotoscoping, which was used, for example, by Ralph Bakshi in animations such as The Lord of the Rings (1978) and American Pop (1981): the actor's movement was filmed and then used as the basis for hand-drawn, frame-by-frame animated characters.

Its uses go well beyond the film and video-game industries. It certainly makes animators' work easier, since they do not have to create animations from scratch. The success of motion capture, however, is due both to advanced technology and to the skills of specialists.

​

Performance capture has come to be widely used in films that aim at simulation, or at getting closer to live-action cinema, by creating almost photo-realistic models of digital characters. The Polar Express used motion capture to allow Tom Hanks to play many different characters.

​

The beginning of the twenty-first century saw this technique spread through the film industry, in hits such as The Lord of the Rings, Pirates of the Caribbean and, slightly later, Avatar. The Avatar films use a huge number of cameras located in many places: the directing team had to reinvent the motion capture system for underwater work, because no one had ever used it to record a film in such an environment before. The director summed up that there were around 100 cameras at different points. Since then the system has been constantly developing and taking film productions to new levels of realism.

​

Another useful tool for creating photorealistic characters is MetaHuman, a character creator that has changed how characters are made for films and video games. MetaHuman lets you prepare realistic digital characters with faithfully reproduced faces, silhouettes and clothing.

​

Allison, T. (2011) 'More than a Man in a Monkey Suit: Andy Serkis, Motion Capture, and Digital Realism', Quarterly Review of Film and Video, 28, pp. 325–341.

 WEEK 6

 Reality Capture (LIDAR) and VFX.

Reality Capture 3D:

​

1. Depth-based scanning


2. Laser Scanning (LIDAR)


3. Photogrammetry

Photogrammetry is a discipline concerned with reconstructing the shapes, sizes and positions of objects from photographs. Thanks to photogrammetry, a designer using the appropriate software can create a 3D model on the basis of photos alone; no device other than a camera is needed.

Photogrammetry is the art, science and technology of obtaining reliable information about physical objects and the environment through the processes of registration, measurement and interpretation of photographic images.

Many of the maps in use today are produced photogrammetrically from aerial photographs. The discipline can be used to capture fixed-point measurements and to estimate the dimensions of the photographed subject, even in 3D.

Photogrammetry is essentially the measurement of the physical world through computational analysis of photographs. It is recognised both as a scientific tool and as a field of study.

As in other areas that combine elements of the real world with the digital one, photogrammetry can be used to estimate movement in 3D (for example, vehicles in motion or biological organisms).

Photographic images are analysed and, depending on the purpose, reference points can be used to create a 3D model, or combined with motion, physical or biomechanical elements. After algorithms process the collected information, the results can be very accurate.

Photogrammetric accuracy can also be used to verify the accuracy of computer algorithms that predict the movement of vehicles, as well as algorithms developed from physical simulation or biomechanics.

​

Characteristics of photogrammetry:


As the name suggests, it is a 3D coordinate measuring technique that uses photographs as the basic medium for metrology (measurement).

The basic principle of the discipline is triangulation or, more specifically, aerial triangulation. By photographing from at least two different locations, a "line of sight" can be developed from each camera towards points on the object in question. These lines of sight are intersected mathematically to produce the 3D coordinates of the points of interest.
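A minimal sketch of that intersection step, using made-up camera matrices and an invented 3D point rather than real survey data, recovers the point's coordinates from its two image projections:

import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point seen in two views."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                                            # back from homogeneous coordinates

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)     # toy camera intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # first camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # second camera shifted 1 m sideways

X_true = np.array([0.3, -0.2, 5.0, 1.0])                           # a point about 5 m away
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]                         # its image in view 1
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]                         # and in view 2

print(triangulate(P1, P2, uv1, uv2))                               # approximately [0.3, -0.2, 5.0]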

The architect Albrecht Meydenbauer was the first to use the term, in 1867; he was also among the first to produce topographic plans and elevation drawings of buildings from photographs.

The use of photogrammetry in topographic mapping has been established for many years, but more recently the technique has been widely applied in architecture, industry, engineering, forensic science, hydrography, medicine and geology.

Its accurate production of 3D information is gaining more and more importance in a number of fields.

The technique can be divided up in different ways; the most common is by the position of the camera during photography (aerial or terrestrial).

Metric photogrammetry is the process of making accurate measurements from photographs, while interpretive photogrammetry is the process of recognising and identifying objects and then assessing their significance through systematic and careful analysis. A distinction is also made between analogue (hardcopy) and digital photogrammetry: analogue work is done with printed photographic products and generally uses optical and mechanical instruments, whereas in digital photogrammetry the image is in digital form and all manipulations are performed by a computer in a virtual domain. All forms of the discipline, however, follow the same basic geometric principles.


LiDAR (Light Detection and Ranging) is a device that emits electromagnetic radiation in the visible-light range, commonly called a low-power laser. The light scattered back from the surroundings is recorded by a built-in receiving telescope, and on this basis detailed three-dimensional maps of the environment are created.
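At heart this is a time-of-flight measurement; a minimal sketch with made-up echo times (not real sensor output) shows how distance follows from the round-trip travel time of a pulse:

C = 299_792_458.0  # speed of light in m/s

def range_from_echo(echo_time_s):
    """Distance to the target from the pulse's round-trip travel time."""
    return C * echo_time_s / 2.0

for t in (66.7e-9, 333e-9, 1.0e-6):        # example round-trip times in seconds
    print(f"{t * 1e9:7.1f} ns  ->  {range_from_echo(t):8.2f} m")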

3D technology:

Point cloud – a set of 3D points created by scanning objects with suitable optical devices. Each point has three XYZ coordinates and a colour defined by three components from the RGB palette. The density of the points determines the amount of detail visible in the scan. A point cloud can be converted into a triangle mesh or a CAD model (a short sketch of a point cloud, and of the decimation step defined below, follows this list).


CAD – three-dimensional design and modelling using appropriate software and the devices that support it.
Digitisation – a 3D technology that converts the image of a real object into digital form.

Decimation – a procedure that homogeneously or adaptively reduces the number of triangles in the mesh generated from the point cloud.

Reverse engineering – a technique for creating technical documentation of an object from the information contained in its 3D scan; useful for reproducing parts or sub-assemblies.

Markers – reference targets placed on the scanned object so that several of its scans can later be assembled into one automatically; only the appropriate software can recognise them.

Dimensioning – specifying the dimensions of an object using lines, numbers and symbols. It is carried out by dedicated programs, is useful when making technical drawings, and is also used in CAD software.

Triangulation – without it, 3D technology could not exist: the construction of a mesh of triangles from a point cloud, each point of which has three coordinates.

Rendering – generating a graphic view of the model from the collected information, onto which colours or selected textures can be overlaid.
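As mentioned above, a minimal sketch using only NumPy and synthetic data illustrates two of these terms together: a point cloud stored as XYZ coordinates plus RGB colour, and a simple voxel-grid decimation that keeps one point per voxel:

import numpy as np

rng = np.random.default_rng(0)
xyz = rng.uniform(0.0, 1.0, size=(100_000, 3))    # fake scan: 100k points inside a 1 m cube
rgb = rng.integers(0, 256, size=(100_000, 3))     # a colour for every point
cloud = np.hstack([xyz, rgb])                     # one row = X, Y, Z, R, G, B

def decimate(cloud, voxel):
    """Keep a single representative point for every voxel of side `voxel`."""
    voxel_idx = np.floor(cloud[:, :3] / voxel).astype(np.int64)
    _, keep = np.unique(voxel_idx, axis=0, return_index=True)
    return cloud[keep]

thinned = decimate(cloud, voxel=0.05)             # 5 cm voxels
print(len(cloud), "->", len(thinned), "points")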

For years we have been observing progress in geoinformatics, driven mainly by the development of hardware and software for acquiring and extracting data: alongside the classic methods (geodetic surveys) there are photogrammetry and laser scanning. For many years, classical geodetic measurement and analogue photogrammetric methods based on developing measurement photographs were the only sources of data on the shape of objects or terrain. Data processing required a great deal of effort and time, as well as skill in orienting and interpreting images. In the mid-1990s, analogue methods were replaced by digital photogrammetric technology. Digital stations equipped with appropriate software did not fundamentally change the data-processing process itself, but the numerical generation of Digital Terrain Models and orthophotomaps from photographs proved to be an effective way of acquiring terrain data. The last decade has seen a rapid development of scanning techniques: photogrammetry gained the support of scanning, reducing the time needed to acquire 3D data and simplifying measurement. Today's 3D data processing is much easier than the work previously performed at photogrammetric stations.

Vosselman, G. and Maas, H.-G. (eds) (2010) Airborne and Terrestrial Laser Scanning. Boca Raton: CRC Press. (Accessed: November 20, 2022).

Advantages of photogrammetry:

- No need for additional devices
- A good way to model large areas
- Can be integrated with 3D scanning to increase the quality of large scans

 WEEK 7

Disadvantages of photogrammetry:

- A large amount of designer work required
- The quality of the model depends on the skills of the designer and the quality of the software
- Lack of guaranteed accuracy in the 3D model

Motion capture – how does it work?


Motion capture, or mocap for short, is a modern technique that maps an actor's movement onto a computer screen in real time, which ultimately makes it possible to create extremely natural-looking computer animations. Its uses go well beyond the film and video-game industries, and it certainly makes animators' work easier, since they do not have to create animations from scratch; its success, however, is due both to advanced technology and to the skills of specialists. When CGI effects began to be used in filmmaking, the creators saw the need to place real actors within them to make the films more convincing. Unfortunately, actors set against computer-generated effects often looked unnatural or even comical.

The beginning of the twenty-first century saw the technique spread through the film industry, in hits such as The Lord of the Rings, Pirates of the Caribbean and, slightly later, Avatar. The Avatar productions use a huge number of cameras located in many places: the team had to reinvent the recording system for underwater motion capture, because no one had used it to record a film in such an environment before, and the director summed up that there were around 100 cameras at different points. Since then the system has been constantly developing and taking film productions to new levels of realism.

Undoubtedly, motion capture is a showcase of modern cinematography. So how are animations created with it? Simply put, cameras capture the movement of special markers placed on the actor's body, and the software converts the captured data into animation. The cameras record the positions of the markers many times per second, which makes the animation extremely smooth. It is also much easier than building a three-dimensional model from scratch that then has to be animated by hand. Using this technique you can, for example, obtain a ready-made fight scene performed by actors, ready for further processing and for computer effects to be layered on top.
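To give a feel for the step where captured data becomes animation, here is a minimal sketch with made-up marker coordinates (not real capture data): it derives one piece of animation data, an elbow angle, from shoulder, elbow and wrist markers over a few frames:

import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) formed by markers a-b-c."""
    u, v = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Three frames of fake marker data: shoulder, elbow and wrist positions in metres.
frames = [
    ([0.0, 1.5, 0.0], [0.3, 1.2, 0.0], [0.6, 0.9, 0.0]),   # arm almost straight
    ([0.0, 1.5, 0.0], [0.3, 1.2, 0.0], [0.3, 0.9, 0.2]),   # elbow bending
    ([0.0, 1.5, 0.0], [0.3, 1.2, 0.0], [0.1, 1.0, 0.3]),   # elbow bent further
]
for i, (shoulder, elbow, wrist) in enumerate(frames):
    print(f"frame {i}: elbow angle = {joint_angle(shoulder, elbow, wrist):.1f} degrees")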

​

There are also several types of motion capture, which fall into two basic groups: the most popular, based on optics, and systems operating on different principles.

The first group is widely used, and its successive versions develop the basic idea. The following categories of optical systems are distinguished:

​

Passive markers are one of the simpler, yet still extremely ingenious, solutions. The actors' costumes are covered with reflective markers, while the cameras simultaneously emit light and receive the light reflected back from the markers. The cameras have a sensitivity threshold, so only sufficiently intense reflected light is recorded. On set there is a reflective reference point against which spatial calculations are made. Thanks to the synchronisation of several, or even several dozen, cameras at once, the position of each marker at a given moment can be determined precisely. The advantages of this method, apart from lower cost, include not having to rig the actor with cables and other electronics.

​

Active markers – these work similarly to passive ones, except that the markers do not reflect light but are themselves its source, in the form of LEDs. Because the markers can be lit selectively, it is easier to map the relationships between them in motion. The method therefore offers greater precision in rendering movement and a greater range from which actors can be recorded.

 

Active markers with time modulation – they are basically an improvement of the previous type, with a recording frequency of almost 1000 frames per second and with the ability to track animations in real time.

 

The second group comprises equally interesting techniques, not based on optics:

 

Inertial – based on wireless readings of gyroscope movement; often not even a camera is needed to use them. Their advantage is that the system is easy to take into the field and can be used in tight spaces.
Mechanical – based on a kind of external skeleton made of plastic or metal attached to the actor's body, so that its movement is reflected faithfully; the advantage is the low price of the system.

 

What are the advantages of a motion capture system?


By far the main advantage of motion capture is the speed-up and the reduction in the amount of work that character animation requires. It also yields natural-looking movement in animated characters, because that movement depends solely on the talent of the actor.

​

Is it possible to capture the facial expressions of actors?


Absolutely! The latest motion capture technique, called performance capture, makes it possible to convey accurately the facial expressions and the smallest emotions that appear on the actors' faces. A great example is Avatar, which made a phenomenal impression thanks to the sheer number of capture points used.

 

Is motion capture a laborious technique?
Although it helps in shooting scenes, motion capture usually requires a special studio, cameras and sufficiently powerful computers. Actors need special training, while animators need experience with the relevant software. It is a complicated and time-consuming process, definitely suited to high-budget film productions.

​

 

Vosselman, G. and Maas, H.-G. (eds) (2010) Airborne and Terrestrial Laser Scanning. Boca Raton: CRC Press. (Accessed: November 22, 2022).

 WEEK 8


Green screen – its alternative or equivalent being the blue box – is an image-processing technique that uses a large background of a single solid colour, in this case green. The person being recorded is placed against this background. During processing, the single-colour background can be precisely cut out and replaced with any image, so the studio can produce footage in which the characters appear in a variety of settings. Why green? Above all because it contrasts best with human skin. When working with a green screen, it is not only the green background that matters but also several other factors, such as proper, even lighting. The position of the recorded person must also be controlled at all times: they cannot stand too close to the background (or they cast a shadow onto it), and incorrect positioning would cause obvious defects during keying (i.e. cutting out the background and inserting the new image). Of course, besides the person being recorded, other objects can also be placed against the green-screen background.

The history of green-screen technology dates back to the 1930s. Even then, the "magic of cinema" was created using first a blue and then a green background. Today the technique is ubiquitous: this is how the TV weather forecast is made. When viewers look at weather maps on their screens, there is really only a green background behind the presenter; the maps are added on the computer by the producers. News, television and game shows are produced in a similar way. The main problem with the early technology, however, was that the camera had to remain steady at all times and nothing could cross in front of the glass matte. A method called the "travelling matte" was therefore developed – a matte that changes from frame to frame along with the moving subject. The effect was achieved by filming the actors against a black background and then increasing the contrast of the shot until the characters' silhouettes became completely white; this silhouette was later used as a mask to hold back part of the image. This technique also had its drawbacks. A kind of breakthrough came in the 1950s with the film Ben-Hur, which, thanks to the painstaking work of its creators, finally delivered an effect that set the standard for several decades.
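A minimal sketch of the keying step itself, assuming hypothetical input files (actor_on_green.jpg and new_background.jpg) and rough HSV thresholds chosen for illustration rather than taken from any production tool:

import cv2
import numpy as np

fg = cv2.imread("actor_on_green.jpg")        # subject filmed against the green screen
bg = cv2.imread("new_background.jpg")
bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))

# Work in HSV so "green" can be described mostly by hue.
hsv = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)
green_lo = np.array([40, 60, 60])            # rough green range – tune per shot
green_hi = np.array([80, 255, 255])
mask = cv2.inRange(hsv, green_lo, green_hi)  # 255 wherever the backing colour is found

# Composite: take the new background where the mask fires, the foreground elsewhere.
comp = np.where(mask[..., None] == 255, bg, fg)
cv2.imwrite("composite.jpg", comp)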


Virtual production: Green screen replacement (2022) Brompton Technology. Available at: https://www.bromptontech.com/applications/virtual-production/ (Accessed: November 2, 2022). 

​

Downie-Solomon, D. (2020) The Pros and cons of LED screens as Green Screen Replacement, Brompton Technology. Brompton Technology Ltd. Available at: https://www.bromptontech.com/the-pros-and-cons-of-led-screens-as-green-screen-replacement/ (Accessed: November 2, 2022). 


 WEEK 9

Virtual Filmmaking.

Unreal Engine was designed to support the mechanics of first-person shooters, but it quickly became a far more versatile tool. Today it is eagerly chosen by the giants of the gaming world: in the last few months alone, Tony Hawk's Pro Skater 1+2, Final Fantasy VII Remake, Godfall and Valorant, all built on it, have seen the light of day. The potential of Epic Games' flagship product has recently been noticed by the film industry: Unreal Engine has lately been used by the creators of two high-profile series, The Mandalorian, set in the Star Wars universe, and Westworld. Post-production specialists created virtual worlds with the tool and then transferred them to large LED projection screens. Thanks to systems that track the movement of the camera around the actors and key objects, and light the scene accordingly, it became possible to compose shots smoothly in real time and to record many scenes in a relatively short time.


Peter Hyoguchi, an American director, decided to go a step further and exploit the possibilities of Unreal Engine to the maximum. The universe of his latest sci-fi film, Gods of Mars, was created entirely with the latest version of the engine. The production takes place on Mars at the end of the 21st century; the main character is a fighter pilot who commands a group of local mercenaries, and he and his companions want to destroy the leader of a mystical cult who has pushed the inhabitants of a mining colony into mass rebellion. "Our creative team includes programmers and developers, but also miniature makers and digital matte artists with a Hollywood pedigree. In this way we want to ensure that Unreal Engine gives a quality equal to that provided by other tools for creating special effects and animation," Hyoguchi emphasises in an interview with Variety magazine. "Working with this engine, for me as a director, is very similar to the experience of working on location or on a standard set, where we can all see and interact with our surroundings. Using green screens is tedious and inconvenient because you have to imagine everything. I love the freedom and intuitiveness of virtual production," he adds.

An additional advantage of Unreal Engine that he points to is a significant reduction in the cost of the whole production. "Zero crew movement provides incredible cost savings. You do not have to pay for travel, insurance, or the salaries of people servicing the location." The use of the new technology also seems an appropriate solution in pandemic conditions: the crew can work with a smaller line-up, and the entire film can be shot using a single LED screen. The release date of Gods of Mars and its distribution plans in Poland remain unknown, but the creators whetted appetites by presenting a video documenting the work on the film, in which the people responsible for the artistic shape of the project and the producers themselves speak.

Real-time technology:

- Unreal Engine is the most developed 3D software for real-time rendering,

- by using assets to build whole environments, it has come to be used not only for games but also for films,

- it is used in VFX companies (ILM, BlueBolt, MPC),

- a "mushroom" tool has been designed to match assets to the environment,

- lights are projected with projectors, because the LED screen does not reproduce this well,

- a lot of assets and materials were used from the marketplace provided by Epic Games.


Hyoguchi, P. (2019) Making Gods of Mars with Unreal Engine [Video]. YouTube, 5 December. Available at: https://www.youtube.com/watch?v=y2VbB8VkRGA

CCK Philosophy (2019) What did Baudrillard think about The Matrix? 10 September 2019. Available at: https://www.youtube.com/watch?v=bf9J35yzM3E&feature=share (Accessed: 26 November 2020).

Assessment 2 
