Luma API

TTAPI Luma API, a text-to-video generation service

Generate Video

POST https://api.ttapi.io/luma/v1/generations


Generate a video from text and images. Task results are returned asynchronously.

💡 Generally speaking, fast mode returns within 300 seconds, though this also depends on the complexity of the prompt and Luma's current processing times. relax mode usually cannot guarantee timeliness and generally returns within 1 to 30 minutes.

Headers

| Name | Type | Description |
| --- | --- | --- |
| TT-API-KEY* | String | Your API Key in TT API, used for request authorization |

Request Body

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| userPrompt | String | true | Text prompt for the video to generate, for example: a red car driving on a road. Note: Luma officially supports multiple languages, but in practice English still gives the best results |
| aspectRatio | String | false | Video aspect ratio. Supported: 9:16, 3:4, 1:1, 4:3, 16:9, 21:9. Default: 9:16 |
| modelName | String | false | Model to use. Supported: ray-v1, ray-v2, ray-v2-flash, ray-v3, ray-v3-reasoning. Default: ray-v3 |
| imageUrl | String | false | Image link used as the starting frame of the video. If set, the first frame of the video will be this image |
| imageEndUrl | String | false | Image link used as the ending frame of the video. If set, the last frame of the video will be this image |
| duration | String | false | Video length. Supported: 5s, 10s. Default: 5s. 10s videos are only available at 720p |
| resolution | String | false | Video resolution. Supported: 720p, 1080p. Default: 720p |
| loop | Boolean | false | Whether to generate a looping video (the start frame and the end frame are the same). Optional values: true, false. Default: false |
| useMode | String | false | Generation speed mode. Supported: relax, fast. Default: relax. Different modes consume different quotas |
| hookUrl | String | false | Callback URL that receives a request when the task completes or fails. If not set, you need to call the fetch endpoint to get the result |

Example Request

import requests
endpoint = "https://api.ttapi.io/luma/v1/generations"
headers = {
    "TT-API-KEY": your_key
}
data = {
    "userPrompt": "a red car driving on a road",
}
response = requests.post(endpoint, headers=headers, json=data)
print(response.status_code)
print(response.json())

Example Response

{
    "status": "SUCCESS",
    "message": "",
    "data": {
        "jobId": "ed1a1b01-7d64-4c8a-acaa-71185d23a2f3"
    }
}
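
For reference, a submission that also sets some of the optional parameters from the table above might look like the sketch below. All values are illustrative (including the image and callback URLs), so adjust them to your own assets and endpoints.

import requests

endpoint = "https://api.ttapi.io/luma/v1/generations"
headers = {
    "TT-API-KEY": "your TT-API-KEY"  # replace with your actual TT API key
}
data = {
    "userPrompt": "a red car driving on a road",
    "aspectRatio": "16:9",      # default is 9:16
    "modelName": "ray-v3",      # default model
    "duration": "5s",           # 10s is only available at 720p
    "resolution": "720p",
    "loop": False,
    "useMode": "fast",          # relax (default) or fast; quotas differ
    "imageUrl": "https://example.com/start-frame.jpg",  # illustrative starting-frame image
    "hookUrl": "https://example.com/luma/callback"      # illustrative callback URL
}
response = requests.post(endpoint, headers=headers, json=data)
print(response.status_code)
print(response.json())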

Extend Video

POST https://api.ttapi.io/luma/v1/extend


Generate a new video that extends a previous video. Task results are returned asynchronously.

Headers

| Name | Type | Description |
| --- | --- | --- |
| TT-API-KEY* | String | Your API Key in TT API, used for request authorization |

Request Body

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| jobId | String | true | The jobId of the previous video |
| userPrompt | String | true | Text prompt for the video to generate, for example: a red car driving on a road. Note: Luma officially supports multiple languages, but in practice English still gives the best results |
| modelName | String | false | Model to use. Supported: ray-v1, ray-v2, ray-v2-flash, ray-v3, ray-v3-reasoning. Default: ray-v3 |
| hookUrl | String | false | Callback URL that receives a request when the task completes or fails. If not set, you need to call the fetch endpoint to get the result |

Example Request

import requests
endpoint = "https://api.ttapi.io/luma/v1/extend"
headers = {
    "TT-API-KEY": your_key
}
data = {
    "jobId": "ed1a1b01-7d64-4c8a-acaa-71185d23a2f3",
    "userPrompt": "a red car driving on a road",
}
response = requests.post(endpoint, headers=headers, json=data)
print(response.status_code)
print(response.json())

Example Response

{
    "status": "SUCCESS",
    "message": "",
    "data": {
        "jobId": "ed1a1b01-7d64-4c8a-acaa-71185d23a2f3"
    }
}
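
If you prefer callbacks over polling, the extend request can also include the hookUrl parameter described above, and TT API will notify that address when the task completes or fails. A minimal sketch follows; the callback URL and the prompt are illustrative values.

import requests

endpoint = "https://api.ttapi.io/luma/v1/extend"
headers = {
    "TT-API-KEY": "your TT-API-KEY"  # replace with your actual TT API key
}
data = {
    "jobId": "ed1a1b01-7d64-4c8a-acaa-71185d23a2f3",   # jobId of the previous video
    "userPrompt": "the red car drives into a tunnel",
    "hookUrl": "https://example.com/luma/callback"      # illustrative callback URL
}
response = requests.post(endpoint, headers=headers, json=data)
print(response.status_code)
print(response.json())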

Fetch Video

POST/GET https://api.ttapi.io/luma/v1/fetch


Query the result of a video generation task. The returned JSON data is the same as the payload sent to hookUrl.

Headers

| Name | Type | Description |
| --- | --- | --- |
| TT-API-KEY* | String | Your API Key in TT API, used for request authorization |

Request Body / Query param

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| jobId | String | true | The jobId of the task, for example: ed1a1b01-7d64-4c8a-acaa-71185d23a2f3 |

Example Request

import requests
endpoint = "https://api.ttapi.io/luma/v1/fetch"
headers = {
    "TT-API-KEY": your_key
}
data = {
    "jobId": "ed1a1b01-7d64-4c8a-acaa-71185d23a2f3",
}
response = requests.post(endpoint, headers=headers, json=data)
print(response.status_code)
print(response.json())
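
Because task results are asynchronous, a client that does not set hookUrl typically polls this endpoint until the job finishes. The sketch below is one way to do that; only the SUCCESS status appears in the examples in this document, so the handling of other status values, the polling interval, and the timeout are assumptions.

import time
import requests

endpoint = "https://api.ttapi.io/luma/v1/fetch"
headers = {
    "TT-API-KEY": "your TT-API-KEY"  # replace with your actual TT API key
}

def wait_for_video(job_id, interval=15, timeout=1800):
    """Poll the fetch endpoint until the job reports a video URL or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        response = requests.post(endpoint, headers=headers, json={"jobId": job_id})
        result = response.json()
        data = result.get("data") or {}
        if result.get("status") == "SUCCESS" and data.get("videoUrl"):
            return data["videoUrl"]
        # Statuses for pending or failed jobs are not documented in this section,
        # so anything without a videoUrl is simply retried until the timeout.
        time.sleep(interval)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout} seconds")

print(wait_for_video("ed1a1b01-7d64-4c8a-acaa-71185d23a2f3"))

Since the endpoint also accepts GET, the same query can be sent as requests.get(endpoint, headers=headers, params={"jobId": job_id}) instead of a POST body.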

Async callback & fetch json

{
    "status": "SUCCESS",
    "message": "success",
    "jobId": "6eb34b44-64c8-4629-a0a1-608737de9583",
    "data": {
        "jobId": "6eb34b44-64c8-4629-a0a1-608737de9583",
        "userPrompt": "Sanrio style illustration of Hello Kitty dressed ",
        "width": "512",
        "height": "512",
        "imageUrl": null,
        "imageEndUrl": null,
        "hookUrl": "https://webhook-test.com/72b9baa490830671b5cd068815788b7e",
        "videoUrl": "https://storage.cdn-luma.com/lit_lite_inference_im2vid_v1.0/11be9408-f349-46df-a43e-a3b57002c1cd/watermarked_video09312742f491e41c19974b52babdba615.mp4",
        "quota": 25
    }
}
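
If hookUrl was set when the task was submitted, TT API sends this JSON to your server instead of requiring you to poll. The sketch below shows one possible receiver using Flask; the route path and port are arbitrary choices (they just need to match the hookUrl you submitted), and the field handling mirrors only the payload shown above.

from flask import Flask, request, jsonify

app = Flask(__name__)

# The route path is arbitrary; it must match the hookUrl sent with the task.
@app.route("/luma/callback", methods=["POST"])
def luma_callback():
    payload = request.get_json(force=True)
    job_id = payload.get("jobId")
    status = payload.get("status")
    data = payload.get("data") or {}
    video_url = data.get("videoUrl")
    print(f"Job {job_id} finished with status {status}: {video_url}")
    # Acknowledge receipt; the exact response TT API expects is not documented here.
    return jsonify({"received": True})

if __name__ == "__main__":
    app.run(port=8000)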