---
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: input
      list:
        - name: type
          dtype: string
        - name: content
          dtype: string
    - name: output
      struct:
        - name: veo3
          list: string
        - name: framepack
          list: string
        - name: framepack_seleted_video
          dtype: string
        - name: hunyuan
          list: string
        - name: hunyuan_seleted_video
          dtype: string
        - name: wan2.2-14b
          list: string
        - name: wan2.2-14b_seleted_video
          dtype: string
        - name: wan2.2-5b
          list: string
        - name: wan2.2-5b_seleted_video
          dtype: string
  splits:
    - name: train
      num_bytes: 98746
      num_examples: 99
  download_size: 36034
  dataset_size: 98746
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

# Visual-Intelligence

## 🔗 Links

## 📖 Dataset Introduction

### Dataset Schema

- `id`: Unique sample identifier.
- `input`: Ordered list describing the input context.
  - `type`: Either `"image"` or `"text"`.
  - `content`: For `"image"`, a relative path to the first-frame image; for `"text"`, the prompt text.
- `output`: Generated candidates and final selections, grouped by model.
  - `veo3`: Relative paths to videos generated by the Veo 3 pipeline.
  - `framepack`: Relative paths to videos generated by FramePack across multiple runs.
  - `hunyuan`: Relative paths to videos generated by Hunyuan across multiple runs.
  - `wan2.2-5b`: Relative paths to videos generated by Wan 2.2 5B across multiple runs.
  - `wan2.2-14b`: Relative paths to videos generated by Wan 2.2 14B across multiple runs.
  - `framepack_seleted_video`: Selected best video among the FramePack candidates.
  - `hunyuan_seleted_video`: Selected best video among the Hunyuan candidates.
  - `wan2.2-5b_seleted_video`: Selected best video among the Wan 2.2 5B candidates.
  - `wan2.2-14b_seleted_video`: Selected best video among the Wan 2.2 14B candidates.
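
The sketch below shows one way to load the split with the Hugging Face `datasets` library and walk these fields. The repository id is an assumption here; substitute the actual dataset id.

```python
# Minimal sketch: load the dataset and inspect one sample.
# NOTE: the repo id below is an assumption, not confirmed by this card.
from datasets import load_dataset

ds = load_dataset("reiofa/Visual-Intelligence", split="train")  # assumed repo id

sample = ds[0]
print("sample id:", sample["id"])

# "input" is an ordered list of {"type", "content"} entries.
for item in sample["input"]:
    if item["type"] == "image":
        print("first-frame image:", item["content"])
    elif item["type"] == "text":
        print("prompt:", item["content"])

# "output" holds per-model candidate videos plus the selected one.
out = sample["output"]
print("FramePack candidates:", out["framepack"])
print("FramePack selection:", out["framepack_seleted_video"])  # field name as spelled in the schema
```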

### Data Format

```json
{
  "id": 1,
  "input": [
    { "type": "image", "content": "thumbnails/mp4/keypoint_localization.jpg" },
    { "type": "text",  "content": "Add a bright blue dot at the tip of the branch on which the macaw is sitting. ..." }
  ],
  "output": {
    "veo3": ["videos/mp4/keypoint_localization.mp4"],
    "framepack": [
      "videos/1_framepack_1.mp4",
      "videos/1_framepack_2.mp4"
    ],
    "hunyuan": [
      "videos/1_hunyuan_1.mp4",
      "videos/1_hunyuan_2.mp4"
    ],
    "wan2.2-5b": [
      "videos/1_wan2.2-5b_1.mp4",
      "videos/1_wan2.2-5b_2.mp4"
    ],
    "wan2.2-14b": [
      "videos/1_wan2.2-14b_1.mp4",
      "videos/1_wan2.2-14b_2.mp4"
    ],
    "framepack_seleted_video": "videos/1_framepack_1.mp4",
    "hunyuan_seleted_video": "videos/1_hunyuan_1.mp4",
    "wan2.2-5b_seleted_video": "videos/1_wan2.2-5b_1.mp4",
    "wan2.2-14b_seleted_video": "videos/1_wan2.2-14b_1.mp4"
  }
}
```
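
Since the image and video entries are relative paths inside the dataset repository (an assumption based on the schema above), individual files can be fetched with `huggingface_hub.hf_hub_download`; the repo id is again a placeholder.

```python
# Sketch: resolve a relative video path from a record to a local file.
# Assumes the videos live at these relative paths inside the dataset repo;
# the repo id is a placeholder, not confirmed by this card.
from huggingface_hub import hf_hub_download

REPO_ID = "reiofa/Visual-Intelligence"  # assumed repo id

def fetch_file(relative_path: str) -> str:
    """Download one video or image from the dataset repo and return its local path."""
    return hf_hub_download(
        repo_id=REPO_ID,
        filename=relative_path,
        repo_type="dataset",
    )

# Example: fetch the selected FramePack video from the sample shown above.
local_path = fetch_file("videos/1_framepack_1.mp4")
print(local_path)
```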

## 🚀 About the Project

Google's Veo 3 shows great promise in visual intelligence, demonstrating strong visual commonsense and reasoning in video generation. We aim to construct a fully open-source evaluation suite that measures current progress in generative video intelligence across multiple dimensions, covering several state-of-the-art proprietary and open-source models.