Published June 5th, 2024, Behind Features

AI Magic Box: Drag a box for future

Tim Yang
Product Manager
Max Zhang
Co-Founder

Frame it, and let AI return the best matches.

As a Product Manager at Motiff, let me guide you through the thought process behind our design of the AI Magic Box.

From Intention to Presentation, from Input to Output

Product design is an integral part of software engineering, and software engineering is about turning our intentions to change the world into user-centric products. During the design stage, product designers come up with countless creative ideas and thoughts. They continuously put these ideas into action and aim to express their creative thinking through the interfaces they design.

The functionality of past design tools, including features like "AI Reduplication" and "AI Layout" in Motiff, aimed to "accelerate the execution of actions." We refer to this as transforming from "intent" to "action."

In the past, creating a page in Photoshop was not easy, but Sketch's Artboard sped up this action.

Adjusting a responsive and structured layout wasn’t easy in Sketch either, but Figma's Auto Layout accelerated it. Motiff's AI Layout further speeds up this process.

Reducing a 100-step action to only one step is the ultimate goal of productivity tools, and AI is bound to play a crucial role.

AIGC appears to have introduced new possibilities into the industry landscape, as AI can directly generate content. Particularly within the UI field, there is a natural expectation among designers and developers for AI to craft user interfaces. However, I believe that beyond the critical need for ongoing AI advancements to streamline the process from intent to action, there is also an emerging pathway: from intention to presentation.

Furthermore, if we look at this proposition from an engineering standpoint, its essence is actually a move from "Input" to "Output".

Art Embraces Randomness, but UI Requires Precision

First, let's look at the Output, the deliverable of the presentation.

Distinctions between UI design and art creation are profound. Art welcomes randomness because randomness often creates artistic value. UI design, by contrast, is a precise engineering process driven by the demands of users and businesses, where precision is an inherent requirement.

For instance, with image generation tools like Midjourney, there is an expectation that AI will go beyond the intended presentation. If we draw a tree, we don’t need AI to precisely count the number of leaves.

However, in UI design, we need to precisely determine the placement of each element and its properties. The expected output of each element in its corresponding location is also clearly defined.

How do we achieve "precision" in output? Here are two approaches:

  • Narrow down the range of the output. With a smaller output range, AI can control the result with greater precision.
  • Increase the information in the input. Providing more information to AI improves its understanding of our intentions.

Extreme Thinking 1: Maximum Input, Minimal Output

In fact, examples of this scenario are everywhere.

Take drawing a rectangle in Motiff, for instance. Although this action doesn't seem complicated, you actually input a substantial amount of information. You can specify:

  • The rectangle’s coordinates on the x and y-axis.
  • The width and height of the rectangle.
  • The rectangle’s border styling, such as dashed lines instead of solid lines.
  • The color of the rectangle and the border. For example, the rectangle could be transparent, while the border could be a gradient color.

How about the output you need? It is a precise rectangular frame. In other words, if you input the above information into AI, it can help you achieve your intention with precision.
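To make this concrete, here is a minimal TypeScript sketch of the information bundled into that single drawing action. The `RectangleSpec` type and its field names are hypothetical and used purely for illustration; they are not Motiff's actual data model.

```typescript
// Hypothetical sketch: the information a designer implicitly supplies
// when drawing one rectangle. Names and fields are illustrative only.
interface RectangleSpec {
  x: number;                                     // x-coordinate on the canvas
  y: number;                                     // y-coordinate on the canvas
  width: number;                                 // width in pixels
  height: number;                                // height in pixels
  borderStyle: "solid" | "dashed";               // e.g. dashed instead of solid lines
  fillColor: string;                             // e.g. "transparent"
  borderColor: string | { gradient: string[] };  // e.g. a gradient border
}

// With this much input, the intended output is fully determined.
const rect: RectangleSpec = {
  x: 120,
  y: 80,
  width: 240,
  height: 96,
  borderStyle: "dashed",
  fillColor: "transparent",
  borderColor: { gradient: ["#FF8A00", "#E52E71"] },
};
```

Notice how little is left to guess once all of these fields are supplied: the output is essentially determined by the input.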

Extreme Thinking 2: Minimal Input, Maximum Output

What does this look like? Such examples have become quite common recently:

For example, you type a single sentence and let AI generate a complete user interface.

Several applications exploring this direction have been introduced, impressing the public with their cool demos. (Apparently, Motiff presents a “cool” demo as well. Who doesn’t like rock and roll?)

But stepping back into reality, it is doubtful that this method adds much value to actual work. Once you have taken the first step of "describing it in one sentence", you will find that one sentence is not enough, leading to a second sentence, a third sentence, and so on, until it becomes a paragraph. Only by these means can AI grasp your intentions thoroughly and accurately.

Ultimately, to generate an interface that aligns closely with your intentions, how much text would you be willing to input?

Breaking the Limitations of Input

We apologize if our narrative has inadvertently misled you: if AI is your partner, you might feel compelled to become overly talkative just to make it understand your intentions.

If you are discussing a product design plan with a real partner, would you do the same? Would you find yourself unconsciously inputting more words to express your intentions to them? I doubt it.

Instead, you'd likely grab a marker and swiftly sketch out flowcharts or wireframes on a whiteboard to convey your intentions to your partner through both words and visuals. This method—instinctual and intuitive—is the most efficient form of input.

Here, we propose a hypothesis: in the realm of UI design, AIGC should not be limited to text prompts because text is not the most efficient input method. A better approach would integrate both graphics and text information.

The Best Integration of Graphics and Text: PRD?

This is a natural deduction. If we hypothesize that a mix of graphics and text makes AI more efficient at generating user interfaces, then we already have a perfect example of this type of input: the Product Requirement Document (PRD). The PRD is filled with rich diagrams and textual explanations describing the intentions and objectives of product design.

What about importing the PRD and letting AI generate UI?

Indeed, this has been one of our long-term research directions. Another section of Motiff Lab, AI-Generated UI, aims to achieve this ultimate goal. Our focus extends beyond simply refining how AI interprets the PRD. We also aim to ensure that AI can apply existing design systems comprehensively in UI creation. You can read "[xxxxx]" to discover more stories related to this blog.

Back to reality, we still face challenges before accomplishing this "one-step" goal, but it inspires us in a different dimension.

A New Interaction: Meeting in a Box

As previously mentioned, we have explored two key issues:

  1. Inputs can be more than text; they can also be visual sketches.
  2. Producing a complete interface in one output is challenging for AI. Can we take a step back and generate a partial interface instead?

Taking a step forward in one area while taking one back in another can seemingly create a balance. These movements of progression and regression may meet at a certain point, potentially offering a new solution in the form of a "box".

Why a "box"?

Because a "box" seems to strike the right balance between efficient input and technically achievable output.

If users draw a box in Motiff, the input can at least include:

  • The relative size of the box.
  • The position of the box and the contextual information around it.
  • Any supplemental text input after drawing the box.

As for the output, we can focus on exactly what information we want to convey inside this box, thereby generating the content users need in a more controllable way.
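To illustrate, here is a minimal TypeScript sketch of the kind of payload such a box could assemble. The `MagicBoxInput` type and its fields are assumptions made for illustration only, not Motiff's actual interface.

```typescript
// Hypothetical sketch of what drawing a "box" could supply to the AI.
// All names are illustrative assumptions, not Motiff's actual API.
interface MagicBoxInput {
  // The relative size of the drawn box
  size: { width: number; height: number };
  // The position of the box on the canvas
  position: { x: number; y: number };
  // Contextual information around the box, e.g. neighboring layers
  surroundingContext: Array<{
    layerName: string;
    bounds: { x: number; y: number; width: number; height: number };
  }>;
  // Optional supplemental text entered after drawing the box
  prompt?: string;
}

// Example: asking for a search bar inside a box drawn near the top of a screen
const input: MagicBoxInput = {
  size: { width: 320, height: 48 },
  position: { x: 20, y: 64 },
  surroundingContext: [
    { layerName: "Status Bar", bounds: { x: 0, y: 0, width: 360, height: 44 } },
  ],
  prompt: "a search bar with a leading icon",
};
```

Even this small structure carries far more grounded context than a free-form sentence, which is why a box can keep the output controllable while demanding very little extra effort from the designer.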

Try AI Magic Box

At present, if you have a Motiff account, you can try AI Magic Box directly in the Playground and experience this new way for designers to interact with AI.

Currently, we still consider this feature to be in the "lab stage". We believe that in the near future, it can come out of the lab and truly provide value for designers.

We also welcome you to provide us with suggestions for this feature.
