# Image to video v2

**Action ID:** `image_to_video_v2`

## Description

Transform static images into dynamic videos using advanced AI video generation. This node supports both single-image animation and interpolation between a start and an end image, producing smooth, animated video content with customizable parameters.

## Connection

| Name               | Description                                 | Required | Category |
| ------------------ | ------------------------------------------- | -------- | -------- |
| PixelML Connection | The PixelML connection used to call the PixelML API. | True     | pixelml  |

## Input Parameters

| Name             | Type    | Required | Default | Description                                                                                                       |
| ---------------- | ------- | :------: | ------- | ----------------------------------------------------------------------------------------------------------------- |
| prompt           | string  |     ✓    | -       | Prompt describing the desired video animation                                                                     |
| image            | string  |     ✓    | -       | Start image of the video                                                                                          |
| end\_image       | string  |     -    | -       | End image of the video. If provided, the video is generated by interpolating between the start and end images     |
| negative\_prompt | string  |     -    | -       | Negative prompt to avoid certain elements                                                                         |
| width            | integer |     -    | 512     | Width of the video. Range: 256 to 1284                                                                            |
| height           | integer |     -    | 512     | Height of the video. Range: 256 to 1284                                                                           |
| steps            | integer |     -    | 20      | Number of generation steps. Range: 1 to 50                                                                        |
| frame\_rate      | integer |     -    | 15      | Frame rate of the video. Range: 1 to 30                                                                           |
| duration         | integer |     ✓    | -       | Duration of the video in seconds. Range: 1 to 10                                                                  |
| guidance\_scale  | integer |     -    | 3       | Guidance scale for generation. Range: 1 to 10                                                                     |

<details>

<summary>View JSON Schema</summary>

**Input Schema**

```json
{
  "description": "Image to video v2 node input.",
  "properties": {
    "prompt": {
      "description": "Prompt",
      "title": "Prompt",
      "type": "string"
    },
    "image": {
      "description": "Start image of the video.",
      "title": "Start image",
      "type": "string"
    },
    "end_image": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "End image of the video. If provided, the video will be generated by interpolating between the start and end image.",
      "title": "End image"
    },
    "negative_prompt": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "type": "null"
        }
      ],
      "default": null,
      "description": "Negative prompt",
      "title": "Negative prompt"
    },
    "width": {
      "default": 512,
      "description": "Width",
      "maximum": 1284,
      "minimum": 256,
      "title": "Width",
      "type": "integer"
    },
    "height": {
      "default": 512,
      "description": "Height",
      "maximum": 1284,
      "minimum": 256,
      "title": "Height",
      "type": "integer"
    },
    "steps": {
      "default": 20,
      "description": "Steps",
      "maximum": 50,
      "minimum": 1,
      "title": "Steps",
      "type": "integer"
    },
    "frame_rate": {
      "default": 15,
      "description": "Frame rate",
      "maximum": 30,
      "minimum": 1,
      "title": "Frame rate",
      "type": "integer"
    },
    "duration": {
      "description": "Duration",
      "maximum": 10,
      "minimum": 1,
      "title": "Duration",
      "type": "integer"
    },
    "guidance_scale": {
      "default": 3,
      "description": "Guidance scale",
      "maximum": 10,
      "minimum": 1,
      "title": "Guidance scale",
      "type": "integer"
    }
  },
  "required": [
    "prompt",
    "image",
    "duration"
  ],
  "title": "ImageToVideoV2NodeInput",
  "type": "object"
}
```

</details>
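The required fields and numeric ranges in the schema above can be checked client-side before submitting a job, which avoids round-trips that would only fail with `Dimension Out of Range` or `Invalid Duration` errors. The helper below is an illustrative sketch, not part of any PixelML SDK; the field names and ranges are taken directly from the schema.

```python
# Minimal pre-flight validation of the documented input constraints.
# Field names and ranges come from the input schema above.

RANGES = {
    "width": (256, 1284),
    "height": (256, 1284),
    "steps": (1, 50),
    "frame_rate": (1, 30),
    "duration": (1, 10),
    "guidance_scale": (1, 10),
}
REQUIRED = {"prompt", "image", "duration"}

def validate_input(params: dict) -> list[str]:
    """Return a list of problems; an empty list means the input looks valid."""
    errors = [f"missing required field: {name}"
              for name in REQUIRED if name not in params]
    for name, (lo, hi) in RANGES.items():
        value = params.get(name)
        if value is not None and not (lo <= value <= hi):
            errors.append(f"{name}={value} outside range {lo}-{hi}")
    return errors

print(validate_input({"prompt": "zoom in",
                      "image": "https://example.com/a.jpg",
                      "duration": 12, "width": 2000}))
# duration and width are both flagged as out of range here
```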

## Output Parameters

| Name  | Type   | Description               |
| ----- | ------ | ------------------------- |
| video | string | URL of the rendered video |

<details>

<summary>View JSON Schema</summary>

**Output Schema**

```json
{
  "description": "Image to video node output.",
  "properties": {
    "video": {
      "description": "Rendered video",
      "title": "Rendered video",
      "type": "string"
    }
  },
  "required": [
    "video"
  ],
  "title": "ImageToVideoV2NodeOutput",
  "type": "object"
}
```

</details>

## How It Works

This node takes one or two images and generates a video by animating their content. If only a start image is provided, the model invents motion and transitions guided by the prompt. If both a start and an end image are provided, the node interpolates smoothly between them while following the prompt. Parameters such as steps, guidance scale, and frame rate control the quality and style of the animation.

## Usage Examples

### Example 1: Single Image Animation

**Input:**

```
prompt: "Camera slowly zooms in, gentle wind blowing"
image: "https://example.com/landscape.jpg"
duration: 5
width: 768
height: 512
frame_rate: 24
steps: 25
```

**Output:**

```
video: "https://example.com/animated_landscape.mp4"
```

### Example 2: Image-to-Image Interpolation

**Input:**

```
prompt: "Smooth transition from day to night"
image: "https://example.com/daytime.jpg"
end_image: "https://example.com/nighttime.jpg"
duration: 8
negative_prompt: "blurry, distorted, low quality"
frame_rate: 30
guidance_scale: 5
```

**Output:**

```
video: "https://example.com/day_to_night_transition.mp4"
```

### Example 3: Portrait Animation

**Input:**

```
prompt: "Subject smiles and looks around naturally"
image: "https://example.com/portrait.jpg"
duration: 3
width: 512
height: 768
steps: 30
guidance_scale: 7
negative_prompt: "distorted face, unnatural movements"
```

**Output:**

```
video: "https://example.com/animated_portrait.mp4"
```

## Common Use Cases

* **Product Demonstrations**: Animate product photos to show features and details in motion
* **Social Media Content**: Create engaging video posts from static images for increased engagement
* **Marketing Materials**: Transform brand images into dynamic video advertisements
* **Storytelling**: Generate video sequences by interpolating between key story moments
* **Real Estate Tours**: Create smooth transitions between property photos
* **Portrait Animation**: Bring portrait photographs to life with subtle movements
* **Artistic Expression**: Create artistic video effects from paintings or illustrations

## Error Handling

| Error Type             | Cause                                          | Solution                                                    |
| ---------------------- | ---------------------------------------------- | ----------------------------------------------------------- |
| Invalid Image Format   | Image URL points to unsupported format         | Ensure images are in JPG, PNG, or another supported format  |
| Image Size Mismatch    | Start and end images have different dimensions | Resize images to match dimensions before processing         |
| Dimension Out of Range | Width or height outside 256-1284 range         | Adjust width and height to be within allowed range          |
| Generation Timeout     | Video generation taking too long               | Reduce duration, steps, or resolution for faster processing |
| Invalid Duration       | Duration outside 1-10 seconds range            | Set duration between 1 and 10 seconds                       |
| Connection Error       | PixelML API connection failed                  | Verify PixelML connection credentials are valid             |
| Insufficient Credits   | PixelML account out of credits                 | Check and add credits to your PixelML account               |
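The first rows of the table are input-validation failures that can be caught before submission, while timeouts and connection errors are often transient and worth retrying. The wrapper below sketches retry with exponential backoff; `run_node` is a stand-in for whatever function actually invokes the node in your workflow, not a real SDK call.

```python
import time

# Illustrative retry wrapper for the transient failures in the table above
# (Generation Timeout, Connection Error). `run_node` is a hypothetical
# callable that submits the job and returns the rendered video URL.

def run_with_retries(run_node, params, attempts=3, base_delay=2.0):
    for attempt in range(attempts):
        try:
            return run_node(params)
        except (TimeoutError, ConnectionError):
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            # exponential backoff: base_delay, 2x, 4x, ...
            time.sleep(base_delay * (2 ** attempt))
```

Permanent failures such as `Invalid Image Format` or `Insufficient Credits` should not be retried; fix the input or account instead.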

## Notes

* **Image Quality**: Higher resolution images (1024x1024 or larger) produce better video quality but take longer to process
* **Frame Rate**: Higher frame rates (24-30fps) create smoother motion but increase processing time and file size
* **Steps**: More steps (30-50) improve quality but significantly increase generation time. Start with 20-25 for most use cases
* **Guidance Scale**: Higher values (7-10) follow the prompt more closely but may reduce creativity. Lower values (3-5) allow more variation
* **End Image**: When using end\_image, ensure it's compatible with the start image for smooth interpolation
* **Negative Prompts**: Use negative prompts to avoid common issues like blur, distortion, or unwanted elements
* **Processing Time**: Video generation can take several minutes depending on duration, resolution, and steps. Plan accordingly in your workflow


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.agenticflow.ai/reference/nodes/image_to_video_v2.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when:

* the answer is not explicitly present in the current page
* you need clarification or additional context
* you want to retrieve related documentation sections
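The question must be URL-encoded when placed in the `ask` query parameter. A minimal sketch of constructing the query URL, using the page URL given above:

```python
from urllib.parse import urlencode

# Build the documentation query URL; urlencode handles spaces and punctuation.
BASE = "https://docs.agenticflow.ai/reference/nodes/image_to_video_v2.md"

def ask_url(question: str) -> str:
    return f"{BASE}?{urlencode({'ask': question})}"

print(ask_url("What image formats are supported?"))
# → https://docs.agenticflow.ai/reference/nodes/image_to_video_v2.md?ask=What+image+formats+are+supported%3F
```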
