General Guidelines
TensorPix Image Generator supports a wide range of tasks, such as text-to-image generation, image editing, reference-based generation, and multi-image creation. For the best results, follow these practices when crafting prompts:
Clearly describe the scene using natural language
Use coherent natural language to describe the subject + action + environment. If aesthetics matter, include descriptors of style, color, lighting, or composition.
Recommended: A girl in a lavish dress walking under a parasol along a tree-lined path, in the style of a Monet oil painting.
Avoid: Girl, umbrella, tree-lined street, oil painting texture.
Specify the application scenario
If you have a specific use case, explicitly state the image's purpose and type in your prompt.
Recommended: Design a logo for a gaming company. The logo features a dog playing with a game controller. The company name "PITBULL" is written on it.
Avoid: An abstract image of a dog holding a controller, and the word PITBULL on it.
Enhance stylistic rendering
If a particular style is needed, use precise style keywords or reference images to achieve better results.
Improve text rendering accuracy
Use double quotation marks for the text that needs to appear in the generated image.
Recommended: Generate a poster with the title "TensorPix".
Avoid: Generate a poster titled TensorPix.
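The sketch below shows one way to apply these guidelines programmatically. The helper function, its parameters, and its wording are illustrative assumptions, not part of TensorPix; only the prompt-writing rules themselves come from the guidance above.

```python
def build_prompt(scene: str, style: str | None = None, on_image_text: str | None = None) -> str:
    """Compose a prompt from a natural-language scene description, an optional
    style descriptor, and any text that must appear in the image (wrapped in
    double quotation marks so the model renders it verbatim)."""
    parts = [scene.rstrip(".") + "."]
    if style:
        parts.append(f"Render it in the style of {style}.")
    if on_image_text:
        parts.append(f'The text "{on_image_text}" is written on it.')
    return " ".join(parts)


# A logo prompt with verbatim brand text, mirroring the example above.
print(build_prompt(
    scene="Design a logo for a gaming company featuring a dog playing with a game controller",
    on_image_text="PITBULL",
))
```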
Prompt Guide
Text to Image
Use clear and detailed natural language to describe the scene. For complex images, describe elements thoroughly to control the output precisely.
TensorPix Image Generator can transform knowledge and reasoning into high-density visual content, such as formulas, diagrams, and educational illustrations. When generating such images, use precise technical terminology to ensure the concepts are represented accurately, and clearly specify the desired visualization format, layout, and style.
Example:
Interior view of an open refrigerator: Top shelf: On the left, there is a carton of milk featuring an illustration of three cows of different sizes grazing on a grassland. On the right, there is an egg holder containing eight eggs. Middle shelf: A plate holds leftover roasted chicken with a small red flag stuck into it. Next to it is a transparent container filled with strawberries. The container is decorated with images of a pineapple, strawberries, and oranges. Bottom shelf: The vegetable drawer contains lettuce, carrots, and tomatoes. On the door shelves, there are bottles of ketchup and mayonnaise.
Example:
Create an infographic showing the causes of inflation. Each cause should be presented independently with an icon.
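If you are calling the generator from code, a detailed prompt is passed like any other. The endpoint URL, parameter names, and response shape below are assumptions for illustration only, not TensorPix's documented API.

```python
import requests

# Hypothetical endpoint and payload fields; substitute the actual TensorPix API.
API_URL = "https://api.example.com/v1/images/generations"

payload = {
    "prompt": (
        "Create an infographic showing the causes of inflation. "
        "Each cause should be presented independently with an icon."
    ),
    "size": "1024x1024",  # assumed parameter
    "n": 1,               # assumed parameter: number of images to generate
}

response = requests.post(API_URL, json=payload, timeout=120)
response.raise_for_status()
print(response.json())  # assumed to contain image URLs or base64-encoded data
```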
Image to Image
TensorPix Image Generator supports combining text and images to perform image editing and reference-based generation tasks. Visual cues like arrows, bounding boxes, and doodles can help designate specific regions within the image.
Image Editing
TensorPix Image Generator supports image editing operations such as addition, deletion, replacement, and modification through text prompts. It is recommended to use clear and concise language to precisely indicate the target elements for editing and the specific changes required.
When the image content is complex and difficult to describe accurately using text alone, visual indicators such as arrows, bounding boxes, or doodles can be used to specify the editing target and its location.
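A minimal sketch of an editing request is shown below: an input image paired with a concise instruction that names the target element and the change. The endpoint, field names, and file path are illustrative assumptions.

```python
import base64
import requests

# Hypothetical endpoint and field names; only the prompt-writing pattern is
# taken from the guidance above.
API_URL = "https://api.example.com/v1/images/edits"

with open("kitchen.jpg", "rb") as f:  # illustrative input image
    image_b64 = base64.b64encode(f.read()).decode("ascii")

payload = {
    "image": image_b64,
    # Name the target element precisely and state the specific change.
    "prompt": (
        "Replace the red kettle on the stove with a white ceramic teapot. "
        "Keep the lighting and all other elements unchanged."
    ),
}

response = requests.post(API_URL, json=payload, timeout=120)
response.raise_for_status()
```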
Reference-Based Generation
TensorPix Image Generator supports extracting key information from reference images, such as character design, artistic style, and product features, to enable tasks like character creation, style transfer, and product design.
When there are specific features that need to be preserved (e.g., character identity, visual style, product design), you can upload a reference image to ensure the generated result aligns with expectations.
When using reference images, clearly describe the following (a prompt sketch follows this list):
Reference Target: Clearly describe the elements to be extracted and retained from the reference image, such as the character design or product material.
Generated Scene Description: Provide detailed information about the desired generated content, including scene, layout, and other specifics.
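A minimal sketch of how the two parts can be combined into a single prompt is shown below. The helper and its wording are illustrative; only the two-part structure comes from the guidance above.

```python
def reference_prompt(reference_target: str, scene_description: str) -> str:
    """Combine the two recommended parts of a reference-based prompt:
    what to keep from the reference image, and what to generate."""
    return f"Using the reference image, keep {reference_target}. {scene_description}"


print(reference_prompt(
    reference_target="the character's face, hairstyle, and outfit design",
    scene_description=(
        "Generate the character riding a bicycle through a rainy, neon-lit "
        "city street at night, in a watercolor style."
    ),
))
```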
When converting sketches (such as wireframes, floor plans, or hand-drawn prototypes) into high-fidelity images, we recommend the following guidelines (a combined prompt sketch follows this list):
Provide a clear original image. If the image contains a text description, indicate "Generate based on the text in the image" in the text prompt.
Clarify the main subject and requirements, such as a high-fidelity UI interface or a modern living room.
Explicitly define key consistencies with the reference, such as ensuring that furniture placement matches the reference image and that the layout follows the prototype.
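The snippet below assembles a sketch-to-image prompt that follows these three guidelines. The wording is illustrative; only the structure (main subject, text handling, consistency constraints) comes from the list above.

```python
# Assemble a sketch-to-image prompt covering the three guidelines above.
subject = "Turn this hand-drawn wireframe into a high-fidelity mobile UI mockup."
text_handling = "Generate the interface labels based on the text in the image."
consistency = (
    "Keep the layout, component placement, and navigation structure exactly "
    "as drawn in the prototype."
)

prompt = " ".join([subject, text_handling, consistency])
print(prompt)
```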
Multi-Image Input
TensorPix Image Generator supports multi-image inputs for composite editing, such as combination, replacement, or style transfer. When using this feature, it's recommended to clearly specify what to reference or edit from each image.
For example, replace the character in Image 2 with the character from Image 1, and generate the result in the style of Image 3.
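A sketch of a multi-image request is shown below. The endpoint, field names, and file paths are assumptions for illustration; the key point is that the prompt refers to each image by its position in the input list.

```python
import base64
import requests

# Hypothetical endpoint and field names for a multi-image edit request.
API_URL = "https://api.example.com/v1/images/edits"


def encode_image(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")


payload = {
    # Order matters: the prompt refers to images by position (Image 1, 2, 3).
    "images": [
        encode_image("image1.png"),
        encode_image("image2.png"),
        encode_image("image3.png"),
    ],
    "prompt": (
        "Replace the character in Image 2 with the character from Image 1, "
        "and generate the result in the style of Image 3."
    ),
}

response = requests.post(API_URL, json=payload, timeout=120)
response.raise_for_status()
```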
Multi-Image Output
TensorPix Image Generator supports generating image sequences with consistent characters and a unified style. This makes it suitable for storyboarding, comic creation, and set-based design scenarios that require a cohesive visual identity, such as IP product design or emoji pack creation.
When generating multiple images, you can trigger series generation with phrases like "a series", "a set", or by specifying the number of images.
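Two illustrative ways of triggering series generation are shown below; the wording is an example, not required phrasing.

```python
# Trigger a series with a set phrase ("a set of ...").
series_by_phrase = (
    "Create a set of emoji-pack stickers of the same orange cat mascot: "
    "waving hello, sleeping, celebrating, and drinking coffee. Keep the "
    "character design and line style consistent across the set."
)

# Trigger a series by specifying the number of images.
series_by_count = (
    "Generate 4 images as a short storyboard of an astronaut planting a flag "
    "on the moon, with a consistent character design and a unified "
    "flat-illustration style across all frames."
)

print(series_by_phrase)
print(series_by_count)
```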