With just a few words, stunning visual worlds can be conjured from the ether. Thanks to an emerging and rapidly evolving genre of artificial intelligence known as text-to-image generation, tools such as DALL-E 2 and Midjourney have opened the doors to a cornucopia of visual creations. With neural networks trained on billions of images and their textual descriptions, these tools can take simple sentences or mixtures of words and offer their interpretation in the form of highly detailed and stunningly beautiful visuals, in seconds.
Among those testing these tools in the field are architects and designers. Some are beginning to use them as powerful new avenues to visualize early-stage concepts, test approaches on ongoing projects, and even give their non-designer clients a way to take a more active role in the design process.
“It’s almost like you’re talking a building into existence,” says Paul Howard Harrison, head of IT design at global architecture firm HDR, which has more than 200 offices around the world.
It may seem like a troubling development for an industry based on creating buildings with teams of highly trained humans, but Harrison says artificial intelligence will likely augment the work of architects.
It’s already starting to happen. Over the past few years, Harrison has explored the use of AI and specially trained neural networks as design tools, applying them to some of HDR’s projects, including a new hospital. For another project, a heritage building in Ontario, the firm trained a neural network on every listed heritage building in the city and let the algorithm generate new designs that fit the local historic style.
Midjourney and DALL-E 2 broaden the applicability of this approach, Harrison says, and he’s used the tools to quickly generate ideas for real-world projects the company is working on, including a mixed-use community center in Toronto. He also explored more playful concepts, including housing designed in the style of children’s book illustrator Maurice Sendak. Another visualization created buildings inspired by renowned architects Robert Venturi and Denise Scott Brown, with the word “duck” on the facade – a reference to their influential book “Learning From Las Vegas”.
Foster + Partners, the UK’s largest architecture firm, has also explored artificial intelligence and machine learning tools during the early conceptual design phase of its projects. “In addition to using reference images or studying online precedents, we can now use these tools to quickly illustrate an idea or feeling we want a particular space to evoke,” a spokesperson from the firm told Fast Company. “This aspect of machine learning is shaping up to be a useful tool for inspiration.”
Some of the results are clearly more realistic than others, which could be concerning. Harrison says the wide accessibility of these AI tools and the detail of the images they produce could create outlandish expectations about what can actually be transformed from a concept into a building.
“The danger is that we’re now suggesting a photorealistic level of resolution,” Harrison says. “It succeeds in establishing the aesthetic of a building, but it’s really only a small part of what we do as architects.”
Taking ideas beyond traditional designs
Architect Andrew Kudless has generated thousands of images with these new tools in recent weeks. A solo practitioner who also teaches at the University of Houston, he says these tools are a sophisticated version of the old-school sketchbook, where vague ideas and building concepts can take rough form. “The great thing about Midjourney and other text-to-image tools is that they serve a real purpose at the start of a design project when you’re conceptualizing and dreaming up what a project could be,” says Kudless.
These tools could also speed up the process of turning those rough ideas and napkin sketches into workable designs. “One of the difficulties we often encounter in architecture is that it’s actually quite difficult to create images that capture the mood or atmosphere of a project without a huge amount of rendering work,” he says. Like many architects, he has spent hours laboring over tiny details in computer-aided design programs, then more hours waiting for images to render in fine detail, only to have to go back and revise or rework them again and again. “We don’t get paid enough to spend the number of hours we actually spend on projects,” he says. “We need tools to help us automate the heavy lifting of a lot of the work we do.”
Kudless uses Midjourney to visualize the use of fabric in architectural designs, such as curtains and blinds that integrate with house facades. It’s a concept he’s been interested in since the late 1990s but has struggled to accurately simulate with the computer design tools that have become commonplace in the architecture industry.
“It was beyond my expertise and something that was very difficult to manage. I just didn’t have time to go there,” he says. “Then suddenly, in the last six weeks of using these AI tools: the AI doesn’t know how to simulate fabric either, but it has looked at millions of images of dresses and tablecloths draped over tables. It knows what fabric looks like without knowing its physics.”
The images he has created with Midjourney show imaginary buildings covered in soft pastel curtains, as if seen inside a kaleidoscope. “And it can do it in seconds,” Kudless adds. “I can explore some of these things in a way that I’ve never been able to with traditional designs.”
Houses you only see in dreams
This ease of use also creates opportunities for non-architects. Harrison says AI can help clients and the public more easily show architects what they think a project can or should look like. Clients already try to do this, Harrison says, but the tools they have are often rudimentary. “I’ve seen everything from Excel spreadsheets that have been colored in to be floor plans to sketches, and more often now we get things like Pinterest boards,” he says. “There is a constant desire on the client side to express what they see or what they would like to see.”
In a way, it might also open up more opportunities for architects. “I’ve had more people contact me about a job in the last month than I probably had in all of last year. People have so many ideas for houses they want to build, or art installations, or music festival pavilions,” says Kudless. He imagines clients coming to him with all sorts of interesting concept images, whether based on places they’ve visited or houses they’ve only seen in their dreams. “It’s been a bit overwhelming for me, but you also see how inspired these potential clients are.”
There are limitations to these tools, which their creators acknowledge. An AI trained on a set of images, even billions of them, is ultimately limited by what’s in those images. This can result in AI systems that, for example, underrepresent women and people of color. In architecture, the bias is less discriminatory but still consequential. Harrison notes that the history of architectural photography has largely favored symmetrical views of buildings and rooms, and spaces devoid of people. These factors effectively narrow the pool of architectural images the tools can produce. Harrison says these limitations strengthen the case for architects taking artificial intelligence into their own hands and developing their own neural networks trained specifically on the types of projects or buildings they are working on at any given time.
Harrison and Kudless both see these tools as part of the evolution of the architectural profession, not a replacement for it. They join a long line of technological advancements that have affected the way architects work, such as the evolution from hand drawing to computer-aided design.
“All of these things have changed the profession, but I’m not really worried about people’s jobs,” Kudless says. “There will be jobs coming out of it that we’re not even aware of.”