In this tutorial, you will use the FLUX Finetuning API to take a bunch of images of Story’s mascot “Ippy” and finetune an AI model to create similar images along with a prompt. Then you will monetize and protect the output IP on Story.
This tutorial is in TypeScript
Steps 1–3 of this tutorial are based on the FLUX Finetuning Beta Guide, which contains examples for calling the API in Python; however, I have rewritten them in TypeScript.
Generative text-to-image models often do not fully capture a creator’s unique vision, and have insufficient knowledge about specific objects, brands or visual styles. With the FLUX Pro Finetuning API, creators can use existing images to finetune an AI to create similar images, along with a prompt.
When an image is created, we will register it as IP on Story in order to grow, monetize, and protect the IP.
In order to create a finetune, we’ll need the input training data!
Create a folder in your project called images. In that folder, add the images you want your finetune to train on. Supported formats: JPG, JPEG, PNG, and WebP. Using more than five images is recommended.
Add Text Descriptions (Optional): In the same folder, create text files with descriptions for your images. Text files should share the same name as their corresponding images. Example: if your image is “sample.jpg”, create “sample.txt”
Compress your folder into a ZIP file. It should be named images.zip
In order to generate an image using a similar style as input images, we need to create a finetune. Think of a finetune as an AI that knows all of your input images and can then start producing new ones.
Let’s make a function that calls FLUX’s /v1/finetune API route. Create a flux folder, and inside that folder add a file named requestFinetuning.ts and add the following code:
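The original code block is not included in this excerpt, so here is a minimal sketch based on the FLUX Finetuning Beta Guide. The endpoint host (api.us1.bfl.ai), the x-key header, the BFL_API_KEY environment variable name, and the exact payload field names and default values are assumptions — verify them against the official /v1/finetune docs before use.

```typescript
import fs from "fs";

// Sketch of a call to FLUX's /v1/finetune route. Host, header, and
// payload fields are assumptions taken from the FLUX Finetuning Beta
// Guide — double-check them against the official API docs.
export async function requestFinetuning(
  zipPath: string,
  finetuneComment: string,
  triggerWord: string = "TOK",
  mode: string = "general"
) {
  // the API expects the training data as a base64-encoded ZIP file
  const encodedZip = fs.readFileSync(zipPath).toString("base64");

  const response = await fetch("https://api.us1.bfl.ai/v1/finetune", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // your FLUX API key (assumed env variable name)
      "x-key": process.env.BFL_API_KEY as string,
    },
    body: JSON.stringify({
      file_data: encodedZip,
      finetune_comment: finetuneComment,
      trigger_word: triggerWord,
      mode,
      iterations: 300,
      captioning: true,
      priority: "quality",
      finetune_type: "full",
      lora_rank: 32,
    }),
  });

  return response.json();
}
```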
Official Docs
In order to learn what each of the parameters in the payload are, see the official /v1/finetune API docs here.
Next, create a file named train.ts and call the requestFinetuning function we just made:
Warning: This is expensive!
Creating a new finetune is expensive, ranging from $2 to $6 at the time of writing. Please review the “FLUX PRO FINETUNE: TRAINING” section on the pricing page.
train.ts
```typescript
import { requestFinetuning } from "./flux/requestFinetuning";

async function main() {
  const response = await requestFinetuning("./images.zip", "ippy-finetune");
  console.log(response);
}

main();
```
Although relatively cheap, running an inference does cost money, ranging from $0.06 to $0.07 at the time of writing. Please review the “FLUX PRO FINETUNE: INFERENCE” section on the pricing page.
Now that we have trained a finetune, we will use the model to create images. “Running an inference” simply means using our new model (identified by its finetune_id), which is trained on our images, to create new images.
There are several different inference endpoints we can use, each with their own pricing (found at the bottom of the page). For this tutorial, I’ll be using the /v1/flux-pro-1.1-ultra-finetuned endpoint, which is documented here.
In our flux folder, create a finetuneInference.ts file and add the following code:
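The original code block is missing from this excerpt; below is a minimal sketch. The endpoint path comes from the tutorial itself, but the host, the x-key header, the BFL_API_KEY variable name, and the payload field names (finetune_id, finetune_strength, prompt) are assumptions to verify against the official docs.

```typescript
// Sketch of a call to /v1/flux-pro-1.1-ultra-finetuned. Host, header,
// and payload field names are assumptions — verify against the docs.
export async function finetuneInference(finetuneId: string, prompt: string) {
  const response = await fetch(
    "https://api.us1.bfl.ai/v1/flux-pro-1.1-ultra-finetuned",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // your FLUX API key (assumed env variable name)
        "x-key": process.env.BFL_API_KEY as string,
      },
      body: JSON.stringify({
        finetune_id: finetuneId,
        // how strongly the finetune influences the output (assumed default)
        finetune_strength: 1.2,
        prompt,
      }),
    }
  );

  return response.json();
}
```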
Official Docs
In order to learn what each of the parameters in the payload are, see the official /v1/flux-pro-1.1-ultra-finetuned API docs here.
Next, create a file named inference.ts and call the finetuneInference function we just made. The first parameter should be the finetune_id we got from running the script above, and the second parameter is a prompt to generate a new image.
inference.ts
```typescript
import { finetuneInference } from "./flux/finetuneInference";

// input your finetune_id here
const FINETUNE_ID = "";
// add your prompt here
const PROMPT = "A picture of Ippy being really happy.";

async function main() {
  const inference = await finetuneInference(FINETUNE_ID, PROMPT);
  console.log(inference);
}

main();
```
As you can see, the status is still pending. We must wait until the generation is ready to view our image. To do this, we need a function that fetches our new inference to check whether it's ready and view its details.
In our flux folder, create a file named getInference.ts and add the following code:
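The original code block is missing from this excerpt; here is a minimal sketch. The /v1/get_result path comes from the tutorial, but the host, the id query parameter, and the x-key header are assumptions — check them against the official docs.

```typescript
// Sketch of a call to /v1/get_result to poll an inference by id.
// Host, query parameter, and header are assumptions to verify.
export async function getInference(id: string) {
  const response = await fetch(
    `https://api.us1.bfl.ai/v1/get_result?id=${id}`,
    {
      headers: {
        // your FLUX API key (assumed env variable name)
        "x-key": process.env.BFL_API_KEY as string,
      },
    }
  );

  // expected shape (per the sample output below): { id, status, result, progress }
  return response.json();
}
```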
Official Docs
In order to learn what each of the parameters in the payload are, see the official /v1/get_result API docs here.
Back in our inference.ts file, let's add a loop that continuously fetches the inference until it's ready. When it's ready, we will view the new image.
inference.ts
```typescript
import { finetuneInference } from "./flux/finetuneInference";
import { getInference } from "./flux/getInference";

// input your finetune_id here
const FINETUNE_ID = "";
// add your prompt here
const PROMPT = "A picture of Ippy being really happy.";

async function main() {
  const inference = await finetuneInference(FINETUNE_ID, PROMPT);

  let inferenceData = await getInference(inference.id);
  while (inferenceData.status != "Ready") {
    console.log("Waiting for inference to complete...");
    // wait 5 seconds
    await new Promise((resolve) => setTimeout(resolve, 5000));
    // fetch the inference again
    inferenceData = await getInference(inference.id);
  }

  // now the inference data is ready
  console.log(inferenceData);
}

main();
```
Once the loop completes, the final log will look like:
```json
{
  "id": "023a1507-369e-46e0-bd6d-1f3446d7d5f2",
  "status": "Ready",
  "result": {
    "sample": "https://delivery-us1.bfl.ai/results/746585f8d1b341f3a8735ababa563ac1/sample.jpeg?se=2025-01-16T19%3A50%3A11Z&sp=r&sv=2024-11-04&sr=b&rsct=image/jpeg&sig=pPtWnntLqc49hfNnGPgTf4BzS6MZcBgHayrYkKe%2BZIc%3D",
    "prompt": "A picture of Ippy being really happy."
  },
  "progress": null
}
```
You can paste the sample URL into your browser to see the final result! Make sure to save the image, as the link will eventually expire.
Next we will register this image on Story as an IP Asset in order to monetize and license the IP. Create a story folder and add a utils.ts file. In there, add the following code to set up your Story Config:
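The original code block is missing from this excerpt; here is a minimal Story Config sketch following the Story SDK's usual setup. It assumes WALLET_PRIVATE_KEY and RPC_PROVIDER_URL environment variables and the Aeneid testnet chain id — verify the exact config shape against the Story SDK docs for your version.

```typescript
import { StoryClient, StoryConfig } from "@story-protocol/core-sdk";
import { http } from "viem";
import { Account, Address, privateKeyToAccount } from "viem/accounts";

// WALLET_PRIVATE_KEY and RPC_PROVIDER_URL are assumed to be set in your .env
const privateKey: Address = `0x${process.env.WALLET_PRIVATE_KEY}`;
export const account: Account = privateKeyToAccount(privateKey);

const config: StoryConfig = {
  account,
  transport: http(process.env.RPC_PROVIDER_URL),
  chainId: "aeneid", // Story's Aeneid testnet
};

// a single client instance the rest of the story folder can import
export const client = StoryClient.newClient(config);
```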
Next we will mint an NFT, register it as an IP Asset, set License Terms on the IP, and then set both NFT & IP metadata.
Luckily, we can use the mintAndRegisterIp function to mint an NFT and register it as an IP Asset in the same transaction.
This function needs an SPG NFT Contract to mint from. For simplicity, you can use a public collection we have created for you on Aeneid testnet: 0xc32A8a0FF3beDDDa58393d022aF433e78739FAbc.
Using the public collection we provide for you is fine, but when you do this for real, you should make your own NFT Collection for your IPs. You can do this in 2 ways:
1. Deploy a contract that implements the ISPGNFT interface, or use the SDK’s createNFTCollection function (shown below) to do it for you. This will give you your own SPG NFT collection that only you can mint from.
createSpgNftCollection.ts
```typescript
import { zeroAddress } from "viem";
import { client } from "./utils";

async function createSpgNftCollection() {
  const newCollection = await client.nftClient.createNFTCollection({
    name: "Test NFTs",
    symbol: "TEST",
    isPublicMinting: false,
    mintOpen: true,
    mintFeeRecipient: zeroAddress,
    contractURI: "",
  });

  console.log(
    `New SPG NFT collection created at transaction hash ${newCollection.txHash}`
  );
  console.log(`NFT contract address: ${newCollection.spgNftContract}`);
}

createSpgNftCollection();
```
2. Create a custom ERC-721 NFT collection on your own and use the register function, providing an nftContract and tokenId, instead of using the mintAndRegisterIp function. See a working code example here. This is helpful if you already have a custom NFT contract with its own custom logic, or if your IPs themselves are NFTs.
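The registerIp function referenced below is not shown in this excerpt; here is a minimal sketch using the mintAndRegisterIp function mentioned above, in a story/registerIp.ts file. It hashes the metadata JSON directly and leaves the metadata URIs empty; in a real app you would upload the metadata to IPFS and pass the resulting URIs, and you may want mintAndRegisterIpAssetWithPilTerms instead to attach License Terms in the same transaction. The metadata fields and the inferenceData shape here are assumptions — verify against the Story SDK docs.

```typescript
import { createHash } from "crypto";
import { client } from "./utils";

// public SPG NFT collection on Aeneid testnet (from this tutorial)
const SPG_NFT_CONTRACT = "0xc32A8a0FF3beDDDa58393d022aF433e78739FAbc";

export async function registerIp(inferenceData: {
  result: { sample: string; prompt: string };
}) {
  // minimal IP + NFT metadata describing the generated image (assumed shape)
  const ipMetadata = {
    title: "Ippy Finetune Image",
    description: `Generated with prompt: ${inferenceData.result.prompt}`,
    image: inferenceData.result.sample,
  };
  const nftMetadata = {
    name: "Ippy Finetune Image",
    description: ipMetadata.description,
    image: inferenceData.result.sample,
  };

  // in production, upload both JSON objects to IPFS and pass the URIs;
  // here we only compute hashes over the JSON itself for illustration
  const ipHash = createHash("sha256")
    .update(JSON.stringify(ipMetadata))
    .digest("hex");
  const nftHash = createHash("sha256")
    .update(JSON.stringify(nftMetadata))
    .digest("hex");

  // mint an NFT and register it as an IP Asset in one transaction
  const response = await client.ipAsset.mintAndRegisterIp({
    spgNftContract: SPG_NFT_CONTRACT,
    ipMetadata: {
      ipMetadataURI: "",
      ipMetadataHash: `0x${ipHash}`,
      nftMetadataURI: "",
      nftMetadataHash: `0x${nftHash}`,
    },
  });

  console.log(`IP registered at tx ${response.txHash}, IP ID: ${response.ipId}`);
  return response;
}
```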
Now that we have completed our registerIp function, let’s add it to our inference.ts file:
inference.ts
```typescript
import { finetuneInference } from "./flux/finetuneInference";
import { getInference } from "./flux/getInference";
import { registerIp } from "./story/registerIp";

const FINETUNE_ID = "";
const PROMPT = "A picture of Ippy being really happy.";

async function main() {
  const inference = await finetuneInference(FINETUNE_ID, PROMPT);

  let inferenceData = await getInference(inference.id);
  while (inferenceData.status != "Ready") {
    console.log("Waiting for inference to complete...");
    await new Promise((resolve) => setTimeout(resolve, 5000));
    inferenceData = await getInference(inference.id);
  }

  // now the inference data is ready
  console.log(inferenceData);

  // register the generated image as IP on Story
  await registerIp(inferenceData);
}

main();
```