
3ds Max: AI Shaping 3D, Concept, and Visualization Work

AI is making strides that benefit 3ds Max users, especially in concept and visualization work. Concept artists are responsible for delivering options before resources are committed to design and modeling, a step that is essential for controlling costs. Modeling complex scenes to final quality can take days, weeks, or months, while concept artists can stay flexible and deliver several options considerably faster, usually within a few days. This stage involves many iterations and demands speed, and newly developed AI technologies are helping to produce faster and more varied iterations. We will present a few of them in this article.

First, different industries have different requirements for concept artists, so before discussing the tools, let's walk through the visualization process and look at where AI resolves some of its steps and where it falls short.

Generally speaking, for visualization, a team coordinates with the client to determine the needs of the project, its relative size, and its style. The concept artist's job is to translate those into visual form quickly so the idea can be developed and ultimately finalized. This step is tailored to the client's expectations and level of involvement. For example, simple sketches on a napkin might suffice for one client, while another might wish to explore only the silhouette or shape of an environment or building and ask for a few pages presenting just that. Mood boards are prepared with a collection of buildings and environments containing many textures, materials, and lighting, wall, ceiling, and floor conditions to determine a client's preference.

Once the use and visual preferences are narrowed down, the artist begins quickly preparing concepts based on them, often with a variety of tools and software. An artist might use 3ds Max to establish the camera view, import some assets, capture that view, and then paint on top of it in Photoshop (a scripted version of that capture step is sketched below). Matte painting, which blends elements taken from various images on the internet in software like Photoshop, is often used to incorporate additional elements. Finally, the artist adds their own touches. Many of these steps are repetitive. Some concept artists never use 3D software to present their concepts at all, relying entirely on matte painting and traditional art skills. Each job is unique.
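To make that 3ds Max capture step concrete, the sketch below uses Max's built-in Python wrapper, pymxs, to create a camera, point the active viewport at it, and render a still that can be painted over in Photoshop. It is a minimal illustration under stated assumptions: the camera name, its position, and the output path are placeholders, and a production setup would adjust resolution and renderer settings.

```python
# Minimal sketch: render a concept "plate" from 3ds Max for paint-over.
# Assumes it runs inside 3ds Max's Python environment (pymxs ships with Max).
# The camera name, its placement, and the output path are placeholder values.
from pymxs import runtime as rt

# Create a free camera and place it roughly where the concept view should be.
cam = rt.Freecamera(name="ConceptCam")          # hypothetical camera name
cam.pos = rt.Point3(2000, -3500, 1600)          # placeholder scene-unit coordinates

# Point the active viewport at the new camera, then render a still frame
# using the scene's current renderer settings.
rt.viewport.setCamera(cam)
rt.render(
    camera=cam,
    outputFile=r"C:\concepts\plate_cam01.png",  # placeholder output path
    vfb=False,                                  # skip opening the frame buffer window
)
```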

AI software essentially builds concept art from text-based user entries, replacing many of these repetitive operations. The software applies the principles it was trained on, drawing on enormous collections of images gathered from the web, to identify a style, find similar buildings and environments, and meld them together. For example, a user can type "organic building style similar to [insert artist name] set inside urban environment at sunset, dark blue metal, glass, and concrete." AI is still learning, and figuring out the perfect entry can be challenging, but results are produced incredibly quickly, typically in seconds rather than days. Changing a word or two will generate entirely different results. Additionally, AI software that uses text-based entry to produce 3D models has been introduced in the last few months: a user can type "turtle with pink shell and eight legs" and the software will construct a rudimentary model.
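To make the prompt-driven workflow concrete, here is a minimal sketch using an open-source text-to-image model (Stable Diffusion through the Hugging Face diffusers library) rather than Midjourney, which is driven through Discord instead of a Python API. The model ID, prompt, and file names below are illustrative placeholders, not part of any specific tool discussed in this article.

```python
# Minimal sketch of prompt-driven concept generation with an open-source
# text-to-image model (Stable Diffusion via the "diffusers" library).
# This stands in for tools like Midjourney, which runs through Discord;
# the model ID, prompt, and output names are illustrative placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = ("organic building set inside an urban environment at sunset, "
          "dark blue metal, glass, and concrete")

# Generate a handful of candidate concepts; changing a word or two in the
# prompt will produce entirely different results.
for i in range(4):
    image = pipe(prompt).images[0]
    image.save(f"concept_option_{i:02d}.png")
```

Each image takes seconds on a recent GPU, which is where the speed advantage over hand-built matte paintings comes from.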

Using AI does present a few added problems. AI software has not yet integrated the ability to analyze the content it puts together and implement convincing depth, reflections, rim lighting, refraction, or shadows. While impressive, the output still lacks an experienced artist's touch (though that may not matter to clients willing to sacrifice presentation for speed). Also, the AI is extracting content from sources users are not aware of and do not own, and using copyrighted content in a public setting can lead to lawsuits. That is also why concept art for visualization is often not shared.

However, the time saved is impossible to ignore. So let's dive into some of the software and examples.

First, let's discuss Midjourney. Midjourney was introduced in beta form a short time ago and has exploded in popularity. See Figure 1 for some of the content and the text entries used to generate it. Note that the text entries displayed in Figure 1 are partial; in some cases, people have spent eighty hours generating the desired results. The technology is run through Discord, and the beta program can be reviewed here: https://www.midjourney.com/home/

Figure 1

Nvidia has introduced generative AI research that produces 3D content trained on 2D images, as captured in the screenshot shown in Figure 2 and described here: https://blogs.nvidia.com/blog/2022/09/23/3d-generative-ai-research-virtual-worlds/

Figure 2

DreamFusion demonstrates converting text entries to 3D models on its project website, shown in Figure 3 and located here: https://dreamfusionpaper.github.io/

Figure 3

These few examples show how visualization and 3D work will evolve dramatically in the years ahead and how technology continues to put more powerful tools in capable hands. This is just the beginning.
