RIFE_FP32

RIFE_FP32 is an ONNX export of a RIFE frame interpolation model intended for video frame generation and FPS upscaling workflows.

Model Description

This repository provides a float32 ONNX version of a RIFE model for frame interpolation.
It is intended for applications that generate intermediate frames between two input video frames to increase perceived smoothness or raise the output framerate.

File

  • RIFE_fp32.onnx

Intended Use

This model is intended for:

  • video frame interpolation
  • FPS upscaling workflows
  • offline processing pipelines
  • ONNX Runtime based applications

Example use cases include:

  • converting 24 FPS video to higher framerates (see the sketch after this list)
  • generating in-between frames for smoother playback
  • integrating frame interpolation into custom desktop tools or video pipelines
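
For example, a simple frame-rate doubler inserts one interpolated frame between each consecutive pair of source frames. The sketch below is illustrative only; interpolate(f0, f1) is a hypothetical helper that wraps the model call outlined under Input / Output and is assumed to return the midpoint frame:

    # `interpolate(f0, f1)` is a hypothetical helper wrapping the model call;
    # it is assumed to return the midpoint frame between its two inputs.
    def double_fps(frames, interpolate):
        # Insert one in-between frame after each consecutive pair (~2x FPS).
        out = []
        for f0, f1 in zip(frames, frames[1:]):
            out.append(f0)
            out.append(interpolate(f0, f1))
        out.append(frames[-1])  # the last source frame has no successor
        return out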

Input / Output

The model takes paired frame data prepared by the calling application and outputs interpolated intermediate frames.

Exact tensor formatting, preprocessing, and batching depend on the export and on the application using the model.
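
The sketch below shows one way a host application might drive the model with ONNX Runtime in Python. The input name, the channel-concatenated layout, and the [1, 6, H, W] shape are assumptions about this export, not confirmed facts; inspect sess.get_inputs() to verify the actual graph signature before relying on them.

    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("RIFE_fp32.onnx",
                                providers=["CPUExecutionProvider"])
    for inp in sess.get_inputs():
        print(inp.name, inp.shape, inp.type)  # verify the real signature first

    def to_tensor(frame_rgb):
        # HWC uint8 RGB -> NCHW float32 in [0, 1], a common RIFE convention
        x = frame_rgb.astype(np.float32) / 255.0
        return np.transpose(x, (2, 0, 1))[None]  # [1, 3, H, W]

    frame0 = np.zeros((256, 448, 3), dtype=np.uint8)  # stand-ins for real frames
    frame1 = np.zeros((256, 448, 3), dtype=np.uint8)

    # Assumption: both frames are concatenated along the channel axis; some
    # RIFE exports expose two separate inputs instead -- adapt as needed.
    pair = np.concatenate([to_tensor(frame0), to_tensor(frame1)], axis=1)
    mid = sess.run(None, {sess.get_inputs()[0].name: pair})[0]  # e.g. [1, 3, H, W]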

Usage Notes

  • Designed for ONNX Runtime inference
  • Best suited for integration into custom frame interpolation pipelines
  • Performance depends on hardware, ONNX Runtime provider, and input resolution
  • CUDA, DirectML, ROCm, or CPU execution may be used depending on the environment (see the provider-selection sketch below)
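
A minimal sketch of provider selection with ONNX Runtime's Python API. The provider names below are the standard ones; which are actually available depends on the installed onnxruntime package (e.g. onnxruntime-gpu for CUDA/ROCm builds, onnxruntime-directml for DirectML):

    import onnxruntime as ort

    # Prefer an accelerated provider when present, falling back to CPU.
    preferred = ["CUDAExecutionProvider", "ROCMExecutionProvider",
                 "DmlExecutionProvider", "CPUExecutionProvider"]
    providers = [p for p in preferred if p in ort.get_available_providers()]

    sess = ort.InferenceSession("RIFE_fp32.onnx", providers=providers)
    print("Running on:", sess.get_providers()[0])  # provider actually in use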

Limitations

  • Quality may degrade on fast motion, occlusion boundaries, transparency, particle effects, and scene cuts
  • Results depend heavily on preprocessing and postprocessing in the host application
  • This repository provides the model file only, not a full standalone interpolation application

License

This model repository is released under the MIT License.

Acknowledgments

RIFE is widely used for video frame interpolation research and practical workflows.
This repository provides an ONNX-packaged version for deployment and application integration.
