This Tutorial is in TypeScript: Steps 1-3 of this tutorial are based on the FLUX Finetuning Beta Guide, which contains examples for calling their API in Python; however, I have rewritten them in TypeScript.
The Explanation
Generative text-to-image models often do not fully capture a creator's unique vision, and have insufficient knowledge about specific objects, brands, or visual styles. With the FLUX Pro Finetuning API, creators can use existing images, along with a prompt, to finetune an AI to create similar images. When an image is created, we will register it as IP on Story in order to grow, monetize, and protect the IP.
0. Before you Start
There are a few steps you have to complete before you can start the tutorial.
- You will need to install Node.js and npm. If you've coded before, you likely have these.
- Add your Story Network Testnet wallet's private key to your .env file.
- Go to Pinata and create a new API key. Add the JWT to your .env file.
- Go to BFL and create a new API key. Add the new key to your .env file.
- Add your preferred Story RPC URL to your .env file. You can just use the public default one we provide. (A sample .env is sketched after this list.)
- Install the dependencies. (A sample install command is also sketched after this list.)
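The original .env and Terminal code blocks are not reproduced here, so below is a rough sketch of both. The variable names (WALLET_PRIVATE_KEY, PINATA_JWT, BFL_API_KEY, RPC_PROVIDER_URL) and the package list are assumptions on my part: use whatever names your scripts actually read, and install whatever your code actually imports.

```
# .env (sample layout - variable names are assumptions)
WALLET_PRIVATE_KEY=<your Story testnet wallet private key>
PINATA_JWT=<your Pinata JWT>
BFL_API_KEY=<your BFL API key>
RPC_PROVIDER_URL=<the public Story RPC URL from the docs>
```

```
# Terminal (package list is an assumption based on the sketches below)
npm install @story-protocol/core-sdk viem pinata-web3 dotenv
npm install -D typescript ts-node @types/node
```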
1. Compile the Training Data
In order to create a finetune, we'll need the input training data!
- Create a folder in your project called images. In that folder, add a bunch of images that you want your finetune to train on. Supported formats: JPG, JPEG, PNG, and WebP. It is also recommended to use more than 5 images.
- Add text descriptions (optional): In the same folder, create text files with descriptions for your images. Text files should share the same name as their corresponding images. Example: if your image is "sample.jpg", create "sample.txt".
- Compress your folder into a ZIP file. It should be named images.zip.
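For example, on macOS or Linux you could create the archive from your terminal (assuming the folder is named images, as above):

```
zip -r images.zip images
```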
2. Create a Finetune
In order to generate an image using a similar style as our input images, we need to create a finetune. Think of a finetune as an AI that knows all of your input images and can then start producing new ones. Let's make a function that calls FLUX's /v1/finetune API route. Create a flux folder, and inside that folder add a file named requestFinetuning.ts with the following code:
Official Docs: In order to learn what each of the parameters in the payload does, see the official /v1/finetune API docs here.
flux/requestFinetuning.ts
Next, create a train.ts file and call the requestFinetuning function we just made:
Warning: This is expensive! Creating a new finetune is expensive, ranging up to $6 at the time of writing this tutorial. Please review the "FLUX PRO FINETUNE: TRAINING" section on the pricing page.
train.ts
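A sketch of the training script, assuming the requestFinetuning signature from the previous sketch:

```typescript
// train.ts
import { requestFinetuning } from "./flux/requestFinetuning";

async function main() {
  // Kick off training on the images.zip we created in step 1.
  const response = await requestFinetuning("./images.zip", "my-first-finetune");
  // Save the logged finetune_id - you will need it in the next steps.
  console.log(response);
}

main();
```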
Running this script will return an id. This is your finetune_id, and it will be used to create images in the following steps.
3. Wait for Finetune
Before we can generate images with our finetuned model, we have to wait for FLUX to finish training! In our flux folder, create a file named finetuneProgress.ts and add the following code:
Official Docs: In order to learn what each of the parameters in the payload does, see the official /v1/get_result API docs here.
flux/finetuneProgress.ts
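A minimal sketch of finetuneProgress, assuming the /v1/get_result endpoint takes the finetune id as an id query parameter (check the docs linked above):

```typescript
// flux/finetuneProgress.ts
import "dotenv/config";

const BFL_API_URL = "https://api.bfl.ml"; // assumption: same base URL as before

export async function finetuneProgress(finetuneId: string) {
  const response = await fetch(`${BFL_API_URL}/v1/get_result?id=${finetuneId}`, {
    headers: { "X-Key": process.env.BFL_API_KEY as string },
  });

  const progress = await response.json();
  console.log(progress); // the status should become "Ready" once training has finished
  return progress;
}
```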
Next, create a finetune-progress.ts file and call the finetuneProgress function we just made:
finetune-progress.ts
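And a sketch of the script that calls it (replace the placeholder with your own finetune_id):

```typescript
// finetune-progress.ts
import { finetuneProgress } from "./flux/finetuneProgress";

async function main() {
  // Paste the finetune_id returned by train.ts here.
  const finetuneId = "<your-finetune-id>";
  await finetuneProgress(finetuneId);
}

main();
```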
4. Run Inference
Warning: This costs money. Although very cheap, running an inference does cost money, ranging from $0.06-$0.07 at the time of writing this tutorial. Please review the "FLUX PRO FINETUNE: INFERENCE" section on the pricing page.
Now we will use our finetuned model (identified by our finetune_id), which is trained on our images, to create new images.
There are several different inference endpoints we can use, each with their own pricing (found at the bottom of the page). For this tutorial, I’ll be using the /v1/flux-pro-1.1-ultra-finetuned
endpoint, which is documented here.
In our flux folder, create a finetuneInference.ts file and add the following code:
Official Docs: In order to learn what each of the parameters in the payload does, see the official /v1/flux-pro-1.1-ultra-finetuned API docs here.
flux/finetuneInference.ts
Next, create an inference.ts file and call the finetuneInference function we just made. The first parameter should be the finetune_id we got from running the script above, and the second parameter is a prompt to generate a new image.
inference.ts
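A sketch of that script, under the same assumptions (the prompt here is just an example that includes the trigger word from the finetune sketch):

```typescript
// inference.ts
import { finetuneInference } from "./flux/finetuneInference";

async function main() {
  // First parameter: the finetune_id from train.ts. Second: your prompt.
  const inference = await finetuneInference(
    "<your-finetune-id>",
    "A photo of TOK on a beach at sunset"
  );
  console.log(inference);
}

main();
```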
To fetch the result of this inference, go back to our flux folder, create a file named getInference.ts, and add the following code:
Official Docs: In order to learn what each of the parameters in the payload does, see the official /v1/get_result API docs here.
flux/getInference.ts
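A sketch of getInference, again assuming /v1/get_result takes an id query parameter:

```typescript
// flux/getInference.ts
import "dotenv/config";

const BFL_API_URL = "https://api.bfl.ml"; // assumption: same base URL as before

export async function getInference(id: string) {
  const response = await fetch(`${BFL_API_URL}/v1/get_result?id=${id}`, {
    headers: { "X-Key": process.env.BFL_API_KEY as string },
  });

  const result = await response.json();
  // Once the status is "Ready", the generated image URL should be in the result
  // payload (e.g. result.result.sample) - check the docs for the exact shape.
  return result;
}
```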
Back in our inference.ts file, let's add a loop that continuously fetches the inference until it's ready. When it's ready, we will view the new image.
inference.ts
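The polling loop might look something like this (the response shape, notably result.result.sample, is an assumption based on the /v1/get_result docs):

```typescript
// inference.ts (updated)
import { finetuneInference } from "./flux/finetuneInference";
import { getInference } from "./flux/getInference";

async function main() {
  const inference = await finetuneInference(
    "<your-finetune-id>",
    "A photo of TOK on a beach at sunset"
  );

  // Poll until the image is ready.
  let result = await getInference(inference.id);
  while (result.status !== "Ready") {
    await new Promise((resolve) => setTimeout(resolve, 5000)); // wait 5 seconds
    result = await getInference(inference.id);
  }

  // Paste this URL into your browser to view (and save) the generated image.
  console.log(result.result.sample);
}

main();
```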
Once the inference is ready, you can paste the returned sample URL into your browser and see the final result! Make sure to save this image, as it will disappear eventually.
5. Set up your Story Config
Next we will register this image on Story as an IP Asset in order to monetize and license the IP. Create a story folder and add a utils.ts file. In there, add the following code to set up your Story Config:
Associated docs: TypeScript SDK Setup
story/utils.ts
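A sketch of the Story Config setup, following the TypeScript SDK Setup docs linked above. The WALLET_PRIVATE_KEY and RPC_PROVIDER_URL names match the .env sketch from step 0, and the "aeneid" chainId value for the testnet is an assumption based on the current SDK docs:

```typescript
// story/utils.ts
import { StoryClient, StoryConfig } from "@story-protocol/core-sdk";
import { http, Account, Address } from "viem";
import { privateKeyToAccount } from "viem/accounts";
import "dotenv/config";

// The private key from .env, prefixed with 0x as viem expects.
const privateKey: Address = `0x${process.env.WALLET_PRIVATE_KEY}`;
export const account: Account = privateKeyToAccount(privateKey);

const config: StoryConfig = {
  account,
  transport: http(process.env.RPC_PROVIDER_URL),
  chainId: "aeneid", // Story's Aeneid testnet
};

export const client = StoryClient.newClient(config);
```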
6. Upload Inference to IPFS
Now that we have made a new inference, we'll have to store the image sample file ourselves on IPFS because the sample is only temporary.
In a new pinata folder, create an uploadToIpfs.ts file and add a function to upload our image and get details about it:
pinata/uploadToIpfs.ts
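A sketch using the pinata-web3 SDK; the package choice, the PINATA_JWT name, and the return-value shape (IpfsHash) are assumptions, so adapt this to whichever Pinata client you installed:

```typescript
// pinata/uploadToIpfs.ts
import { PinataSDK } from "pinata-web3";
import "dotenv/config";

const pinata = new PinataSDK({ pinataJwt: process.env.PINATA_JWT });

// Downloads the temporary FLUX sample and pins it to IPFS, returning its CID.
export async function uploadImageToIPFS(imageUrl: string, fileName: string) {
  const imageResponse = await fetch(imageUrl);
  const blob = await imageResponse.blob();
  const file = new File([blob], fileName, { type: "image/jpeg" });
  const upload = await pinata.upload.file(file);
  return upload.IpfsHash;
}
```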
7. Set up your IP Metadata
In your story folder, create a registerIp.ts file.
View the IPA Metadata Standard and construct the metadata for your IP as shown below:
story/registerIp.ts
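The original code block is not shown here, so below is a sketch of what this part of registerIp.ts might look like. It assumes the uploadImageToIPFS helper from the previous sketch, uses the SDK's generateIpMetadata helper as one way to build metadata that follows the IPA Metadata Standard, and fills in placeholder values:

```typescript
// story/registerIp.ts
import { client, account } from "./utils";
import { uploadImageToIPFS } from "../pinata/uploadToIpfs";

export async function registerIp(imageUrl: string) {
  // Pin the temporary FLUX sample to IPFS so the metadata points at permanent storage.
  const imageCid = await uploadImageToIPFS(imageUrl, "finetune-inference.jpg");
  const imageGatewayUrl = `https://ipfs.io/ipfs/${imageCid}`;

  // IP metadata following the IPA Metadata Standard (all values are placeholders).
  const ipMetadata = client.ipAsset.generateIpMetadata({
    title: "My Finetuned Image",
    description: "An image generated with a FLUX Pro finetune.",
    image: imageGatewayUrl,
    mediaUrl: imageGatewayUrl,
    mediaType: "image/jpeg",
    creators: [
      {
        name: "Your Name",
        address: account.address,
        contributionPercent: 100,
      },
    ],
  });

  // The NFT metadata, IPFS uploads, and registration call are added in steps 8-10.
}
```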
8. Set up your NFT Metadata
In the registerIp.ts file, configure your NFT Metadata, which follows the OpenSea ERC-721 Standard.
story/registerIp.ts
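Continuing inside registerIp, a sketch of NFT metadata following the OpenSea ERC-721 standard (names and descriptions are placeholders; imageGatewayUrl comes from the step 7 sketch):

```typescript
// story/registerIp.ts (continued inside registerIp)
// NFT metadata following the OpenSea ERC-721 standard.
const nftMetadata = {
  name: "My Finetuned Image NFT",
  description: "An NFT representing ownership of an image generated with a FLUX Pro finetune.",
  image: imageGatewayUrl,
};
```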
9. Upload your IP and NFT Metadata to IPFS
In the pinata folder, create a function to upload your IP & NFT Metadata objects to IPFS:
pinata/uploadToIpfs.ts
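A sketch of the JSON upload helper, under the same pinata-web3 assumption as before:

```typescript
// pinata/uploadToIpfs.ts (add next to uploadImageToIPFS)
export async function uploadJSONToIPFS(json: any) {
  const upload = await pinata.upload.json(json);
  return upload.IpfsHash;
}
```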
story/registerIp.ts
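Back in registerIp.ts, the metadata objects can then be uploaded and hashed. Hashing the JSON with sha256 is an assumption here, following common Story examples:

```typescript
// story/registerIp.ts
// At the top of the file:
import { createHash } from "crypto";
import { uploadJSONToIPFS } from "../pinata/uploadToIpfs";

// Then, inside registerIp, after building ipMetadata and nftMetadata:
const ipIpfsHash = await uploadJSONToIPFS(ipMetadata);
const ipHash = createHash("sha256").update(JSON.stringify(ipMetadata)).digest("hex");
const nftIpfsHash = await uploadJSONToIPFS(nftMetadata);
const nftHash = createHash("sha256").update(JSON.stringify(nftMetadata)).digest("hex");
```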
10. Register the NFT as an IP Asset
Next we will mint an NFT, register it as an IP Asset, set License Terms on the IP, and then set both NFT & IP metadata. Luckily, we can use the mintAndRegisterIp function to mint an NFT and register it as an IP Asset in the same transaction.
This function needs an SPG NFT Contract to mint from. For simplicity, you can use a public collection we have created for you on Aeneid testnet: 0xc32A8a0FF3beDDDa58393d022aF433e78739FAbc.
Creating your own custom ERC-721 collection
Using the public collection we provide for you is fine, but when you do this for real, you should make your own NFT Collection for your IPs. You can do this in 2 ways:
- Deploy a contract that implements the ISPGNFT interface, or use the SDK's createNFTCollection function (shown below) to do it for you. This will give you your own SPG NFT Collection that only you can mint from.
createSpgNftCollection.ts
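A sketch of what that might look like (the exact parameter set for nftClient.createNFTCollection is an assumption; check the SDK docs):

```typescript
// createSpgNftCollection.ts
import { zeroAddress } from "viem";
import { client } from "./story/utils";

async function main() {
  // Creates an SPG NFT collection that only your wallet can mint from.
  const newCollection = await client.nftClient.createNFTCollection({
    name: "My Finetune IP Collection",
    symbol: "FINE",
    isPublicMinting: false,
    mintOpen: true,
    mintFeeRecipient: zeroAddress,
    contractURI: "",
  });

  // The response includes the address of your new SPG NFT contract.
  console.log(newCollection);
}

main();
```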
- Create a custom ERC-721 NFT collection on your own and use the register function (providing an nftContract and tokenId) instead of using the mintAndRegisterIp function. See a working code example here. This is helpful if you already have a custom NFT contract that has your own custom logic, or if your IPs themselves are NFTs.
Associated Docs:
ipAsset.mintAndRegisterIp
story/registerIp.ts
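Finally, a sketch of the mintAndRegisterIp call, continuing inside registerIp. The ipfs.io gateway URLs and the 0x-prefixed hash format are assumptions following common Story examples:

```typescript
// story/registerIp.ts (continued inside registerIp)
const response = await client.ipAsset.mintAndRegisterIp({
  // The public Aeneid testnet collection above, or your own SPG NFT contract.
  spgNftContract: "0xc32A8a0FF3beDDDa58393d022aF433e78739FAbc",
  ipMetadata: {
    ipMetadataURI: `https://ipfs.io/ipfs/${ipIpfsHash}`,
    ipMetadataHash: `0x${ipHash}`,
    nftMetadataURI: `https://ipfs.io/ipfs/${nftIpfsHash}`,
    nftMetadataHash: `0x${nftHash}`,
  },
});

console.log(`IP Asset registered: ${response.ipId} (tx: ${response.txHash})`);
return response;
```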
11. Register our Inference
Now that we have completed our registerIp function, let's add it to our inference.ts file:
inference.ts
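Putting it together, the final inference.ts might look like this, assuming the registerIp signature from the sketches above (it takes the temporary sample URL):

```typescript
// inference.ts (final)
import { finetuneInference } from "./flux/finetuneInference";
import { getInference } from "./flux/getInference";
import { registerIp } from "./story/registerIp";

async function main() {
  const inference = await finetuneInference(
    "<your-finetune-id>",
    "A photo of TOK on a beach at sunset"
  );

  // Poll until the image is ready.
  let result = await getInference(inference.id);
  while (result.status !== "Ready") {
    await new Promise((resolve) => setTimeout(resolve, 5000));
    result = await getInference(inference.id);
  }

  const imageUrl = result.result.sample;
  console.log(`Generated image: ${imageUrl}`);

  // Upload the image to IPFS and register it as an IP Asset on Story.
  await registerIp(imageUrl);
}

main();
```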