Figma’s new Make Designs feature faced backlash for nearly replicating Apple’s famous iPhone weather app UI. We analyze issues like training data bias and copyright laws around AI tools.
When Figma launched its Make Designs feature to auto-generate mobile app layouts, the AI-powered tool faced criticism for copying the look and feel of Apple’s well-known iOS weather app.
Generative AI Controversy: Figma Pulls Tool After Copying Apple Designs
The excitement around AI-powered design tools creating original artwork with just a text description hit a speed bump this week. Popular design platform Figma unveiled a new feature called Make Designs that leverages generative AI to automatically generate app UI mockups. But soon after launch, users noticed the tool’s suggestions looked almost identical to Apple’s famous iOS weather app.
This controversy around potentially copying existing designs highlights key debates around using AI in creative workflows. In this post, we’ll analyze the Figma plagiarism accusations, the role of training data bias in generative models, ethical considerations of AI-assisted design, and the overall future of AI in the design industry.
The Figma Make Designs Controversy
In late June 2024, at its Config conference, Figma introduced new AI-powered features to assist designers with brainstorming and ideation. This included a tool called Make Designs that could generate custom mobile app layouts from a short text prompt.
The promise of quickly whipping up unique UI concepts from just a text description generated huge buzz. But soon accusations started flying that Make Designs was reproducing existing designs rather than creating new ones.
Alarmingly Similar to Apple’s Weather App
On July 1st, Andy Allen, CEO of Not Boring Software, posted on Twitter showing how Make Designs created an app layout that looked almost identical to the famous iOS weather app made by Apple:
(Image: Figma’s Make Designs output shown beside Apple’s iOS weather app)
This immediately raised concerns that Figma’s AI models might have been improperly trained on existing apps without permission. The similarities seemed too close to be a mere coincidence.
Allen warned designers using Make Designs to carefully check any layouts generated for potential legal issues around copying other apps’ designs.
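One lightweight way to act on that warning is an automated similarity check before publishing. The sketch below is purely illustrative and not any tool Figma or Allen described: it compares a generated mockup against a reference screenshot using a simple average-hash, with images stood in by grayscale pixel grids (a real pipeline would hash actual screenshots, for example with a library like imagehash).

```python
import random

def average_hash(pixels, hash_size=8):
    """Downscale a grayscale grid to hash_size x hash_size by block
    averaging, then threshold each cell at the overall mean."""
    bh = len(pixels) // hash_size
    bw = len(pixels[0]) // hash_size
    cells = []
    for r in range(hash_size):
        for c in range(hash_size):
            block = [pixels[r * bh + i][c * bw + j]
                     for i in range(bh) for j in range(bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]

def looks_too_similar(generated, reference, max_bits=10):
    """Flag a match when the two hashes differ in fewer than
    max_bits of their 64 positions (Hamming distance)."""
    ga, ra = average_hash(generated), average_hash(reference)
    return sum(x != y for x, y in zip(ga, ra)) < max_bits

# Stand-in 16x16 "screenshots": a reference, a slightly brightened
# near-copy, and an unrelated random image.
random.seed(0)
ref = [[(r * 16 + c) % 256 for c in range(16)] for r in range(16)]
near_copy = [[min(255, v + 3) for v in row] for row in ref]
unrelated = [[random.randrange(256) for _ in range(16)] for _ in range(16)]

print(looks_too_similar(near_copy, ref))   # flags the near-copy
print(looks_too_similar(unrelated, ref))   # passes the unrelated image
```

Average hashing is deliberately crude: it catches near-duplicates of overall layout and tone, not subtle stylistic borrowing, so a check like this complements rather than replaces human review.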
Figma Response and Policies Around Training Data
In response to the growing criticism, Figma co-founder and CEO Dylan Field posted a Twitter thread explaining the company’s side of the story.
Field denied that Make Designs was trained on Figma users’ design files or on existing mobile apps. He attributed the controversy instead to low variability and originality in the third-party AI models the feature relies on:
“The AI models powering Make Designs are general purpose, not targeted models … The main issue is variability is too low.”
According to Figma CTO Kris Rasmussen, the Make Designs feature taps into third-party AI engines such as OpenAI’s GPT-4o and Amazon’s Titan Image Generator. He suggested that the opaque training data behind these vendors’ models could explain the Apple weather app similarities.
“There was no training as part of this feature or any of our generative features … We are looking into what extent the similarities are a function of the third party models we are using.”
This highlights an ongoing challenge for companies leveraging pre-trained generative AI models like GPT-3 and DALL-E. The training data and policies of commercial AI vendors are often opaque or poorly documented.
For its part, Figma also unveiled formal policies stating that customer design files will only be used to train its models with explicit opt-in consent:
“Users have until August 15th to decide if they want to opt-in or out of allowing their content to be used for Figma’s training.”
This incident will likely spur more demand from designers for clarity around AI training data sources and consent requirements. Trust and transparency are still works-in-progress for generative AI right now.
Key Debates Around Ethics of AI Design Tools
The Figma controversy also feeds into several heated debates already swirling around the role of AI in creative workflows like design and art. As adoption of generative models spreads, expect more soul-searching about balancing innovation with ethical norms.
Training Biases and Unwanted Similarities
At the heart of this situation lie real concerns about bias in training data causing AI tools to generate work uncomfortably similar to existing original pieces.
For example, if Apple’s own app layouts, icons and workflows were overrepresented in the datasets used to train tools like GPT-4 or Amazon’s image generator, it risks baking in an inherent bias toward Apple’s distinctive visual style.
Without proper safeguards, an AI model only knows how to replicate what already exists rather than flexing its own creative muscle. This undermines the very promise of AI-assisted tools: expanding the realm of what’s imaginable.
Addressing unfair biases will require more diversity and variability across training data through tactics like:
- Open-sourcing datasets
- Expanding sources beyond Big Tech companies
- Actively searching for underrepresented design elements
- Seeking smaller datasets focused on uniqueness
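Field’s complaint that “variability is too low” can even be made measurable. The sketch below is a hypothetical diversity score, not anything Figma has described: it treats each generated layout as a small feature vector (the feature names are invented for illustration) and reports the mean pairwise distance across a batch.

```python
from itertools import combinations
from math import dist

def variability(layouts):
    """Mean Euclidean distance over all pairs of layout feature
    vectors; a near-zero score means the generator repeats itself."""
    pairs = list(combinations(layouts, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

# Illustrative features: (cards, columns, hero_image_ratio, accent_hue)
clones = [(4, 1, 0.40, 210), (4, 1, 0.40, 210), (4, 1, 0.41, 212)]
varied = [(4, 1, 0.40, 210), (8, 2, 0.00, 30), (2, 3, 0.60, 120)]

print(variability(clones))  # small: near-duplicate designs
print(variability(varied))  # large: genuinely distinct designs
```

In a real evaluation the features would need normalizing, since hue dominates the raw distance here; even so, a crude score like this lets a team regression-test a generator for mode collapse before shipping.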
Building unbiased AI is an enormous challenge, but also an opportunity to democratize access to these tools for underrepresented groups.
Legal Quandaries Around Copyright
The implications also spill over into thorny debates about copyright law and protections for AI-generated creations.
If an AI model like Make Designs really did closely replicate Apple’s original iPhone weather app without permission, it raises legal questions around violations of Apple’s intellectual property.
Companies leveraging generative AI need to carefully audit for potential copyright infringement issues in any AI outputs before wide public release. As the famous monkey selfie copyright dispute highlighted, creativity and ownership often become weirdly blurred with unpredictably “creative” AI systems.
Attributing proper credit and ownership for AI-designed works will become more pressing as adoption spreads. Relying on narrow Big Tech training datasets poses the extra risk that certain styles will dominate without fair compensation to their creators.
Practical Impacts for Designers
For rank-and-file designers just trying to do their jobs, episodes like the Make Designs fiasco underscore the practical, real-world impact that debates about AI ethics and policy can have:
- Legal liability if AI copies existing work
- Reputational damage from perceptions of plagiarism
- Wasted time if outputs aren’t original
- Constraints to creativity from biased models
Thankfully, experienced designers have plenty of wisdom around leveraging tools ethically and effectively. Instead of replacing human creativity, responsible use of AI generation focuses on:
- Sparking new directions not considered
- Accelerating early ideation and brainstorming
- Identifying promising concepts faster
- Iterating more fluidly with less repetitive work
AI collaboration works best when humans act as “curators” of the novel concepts produced, redirecting as needed rather than blindly accepting every suggestion.
The Road Ahead for AI in Design Tools
Despite this early stumble for Figma, the long-term trajectory toward AI transformation of the design industry seems inevitable. But episodes like this should encourage more dialogue within the designer community about standards for ethical, fair and socially-aware development of new AI-powered features.
Key considerations for the future roadmap around generative design tools include:
- Rigorous bias testing for unfair replication
- Allowing user opt-in/opt-out of data usage
- Exploring incentives for dataset open-sourcing
- Better transparency around training sources
- Industry auditing for privacy and data policies
- Rating systems for algorithmic fairness and originality
- Legal support for misuse incidents
Creating checks and balances for generative AI via peer oversight offers a path to democratize these powerful technologies for social good. With conscientious collaboration between great designers and great engineers, a bright future awaits where AI unlocks new heights of human creativity instead of just replicating the past.
The stories of tools like DALL-E, GPT-3 and AlphaGo illustrate how emergent behavior from complex AI systems can surprise even their own creators. As companies like Figma work closely with commercial providers of generative models, anticipating unexpected quirks will require vigilant attention.
But the quest to build artificial intelligence that harmoniously augments human intelligence has only just begun. With open minds, open hearts and open data, artificial creativity stands ready to help push the boundaries of what’s possible.
The growth of AI design tools faces obstacles around potential copyright violations or replicating existing works too closely. Readers can contribute perspectives on managing generative AI thoughtfully by commenting below.