Scalable video REST API to convert, resize, optimize, and compress videos for the web.
First, your video must be uploaded to Bytescale or otherwise accessible to Bytescale:
Use the Bytescale Dashboard to upload a video manually.
Use the Upload Widget, Bytescale SDKs or Bytescale API to upload a video programmatically.
Use our external storage options to process external videos.
Build a video processing URL:
Get the raw URL for your file:
https://upcdn.io/W142hJk/raw/example.mp4
Replace "raw" with "video":
https://upcdn.io/W142hJk/video/example.mp4
Add querystring parameters to control the output:
https://upcdn.io/W142hJk/video/example.mp4?h=1080
Watch your video by navigating to the URL from step 2.
By default, your video will be encoded to H.264 at 30 FPS using the input video's dimensions.
The default HTTP response will be an HTML webpage with an embedded video player. This is for debugging purposes only: developers are expected to override this behavior by specifying an f parameter when embedding videos into their webpages and apps.
To embed a video in a webpage using Video.js:
<!DOCTYPE html>
<html>
<head>
  <link href="https://unpkg.com/video.js@7/dist/video-js.min.css" rel="stylesheet">
  <script src="https://unpkg.com/video.js@7/dist/video.min.js"></script>
  <style type="text/css">
    .video-container { height: 316px; max-width: 600px; }
  </style>
</head>
<body>
  <div class="video-container">
    <video-js class="vjs-fill vjs-big-play-centered" controls preload="auto"
              poster="https://upcdn.io/W142hJk/image/example.mp4?input=video&f=webp&f2=jpeg">
      <p class="vjs-no-js">To view this video please enable JavaScript.</p>
    </video-js>
  </div>
  <script>
    var vid = document.querySelector('video-js');
    var player = videojs(vid, {responsive: true});
    player.on('loadedmetadata', function() {
      // Begin playing from the start of the video. (Required for 'f=hls-h264-rt'.)
      player.currentTime(player.seekable().start(0));
    });
    player.src({
      src: 'https://upcdn.io/W142hJk/video/example.mp4!f=hls-h264-rt&h=480&h=1080',
      type: 'application/x-mpegURL'
    });
  </script>
</body>
</html>
The f=hls-h264-rt output format is designed to reduce the wait time for your viewers when the given video has not been transcoded before. Like the other output formats, this video format incurs an initial delay while transcoding starts. However, unlike the other formats, once transcoding begins the video will be streamed to viewers during transcoding. As with the other formats, once transcoded, the resulting video will be cached and will not need to be transcoded again.
To create a video thumbnail (also known as a "video poster image"):
Replace /raw/ with /image/ in your video's URL.
Use the Image Processing API's querystring parameters to customize your video's thumbnail.
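For example, a thumbnail can be embedded directly in an img element. This is a minimal sketch that reuses the poster URL pattern from the Video.js example above (the input=video, f=webp, and f2=jpeg parameters come from that example; additional Image Processing API parameters are optional):
<img src="https://upcdn.io/W142hJk/image/example.mp4?input=video&f=webp&f2=jpeg" alt="Video thumbnail" />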
MP4 AVC/H.264 output (f=mp4-h264) is the cheapest, fastest, and simplest option for video transcoding.
To create an MP4 video file:
Replace /raw/ with /video/ in the video's URL, and then append ?f=mp4-h264 to the URL.
Navigate to the URL (i.e. request the URL using a simple GET request).
Wait for status: "Succeeded" in the JSON response.
The result will contain a URL to the MP4 video:
https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264
{ "jobUrl": "https://api.bytescale.com/v2/accounts/W142hJk/jobs/ProcessFileJob/01H3211XMV1VH829RV697VE3WM", "jobDocs": "https://www.bytescale.com/docs/job-api/GetJob", "jobId": "01H3211XMV1VH829RV697VE3WM", "jobType": "ProcessFileJob", "accountId": "W142hJk", "created": 1686916626075, "lastUpdated": 1686916669389, "status": "Succeeded", "summary": { "result": { "type": "Artifact", "artifact": "/video.mp4", "artifactUrl": "https://upcdn.io/W142hJk/video/example.mp4!f=mp4-h264&a=/video.mp4" } }}
WebM output (f=webm-vp8 and f=webm-vp9) takes considerably longer to transcode than MP4 and HLS. However, VP9 may offer a better video quality to file size ratio than MP4 and HLS encoded with H.264. Like H.264, VP9 is supported by most browsers.
To create a WebM video file:
Replace /raw/ with /video/ in the video's URL, and then append ?f=webm-vp9 to the URL.
Navigate to the URL (i.e. request the URL using a simple GET request).
Wait for status: "Succeeded" in the JSON response.
The result will contain a URL to the WebM video:
https://upcdn.io/W142hJk/video/example.mp4?f=webm-vp9
{ "jobUrl": "https://api.bytescale.com/v2/accounts/W142hJk/jobs/ProcessFileJob/01H3211XMV1VH829RV697VE3WM", "jobDocs": "https://www.bytescale.com/docs/job-api/GetJob", "jobId": "01H3211XMV1VH829RV697VE3WM", "jobType": "ProcessFileJob", "accountId": "W142hJk", "created": 1686916626075, "lastUpdated": 1686916669389, "status": "Succeeded", "summary": { "result": { "type": "Artifact", "artifact": "/video.webm", "artifactUrl": "https://upcdn.io/W142hJk/video/example.mp4!f=webm-vp9&a=/video.webm" } }}
HLS output (f=hls-h264 and f=hls-h265) reduces bandwidth by providing multiple bitrates (ABR) for devices to switch between. Only H.264 is widely supported by browsers.
To create an HTTP Live Streaming (HLS) video file:
Replace /raw/ with /video/ in the video's URL, and then append ?f=hls-h264 to the URL.
Add parameters from the Video Transcoding API, Video Compression API, or Video Resizing API.
You can create adaptive bitrate (ABR) videos by specifying multiple groups of resolution and/or bitrate parameters. The end-user's video player will automatically switch to the most appropriate variant during playback.
You can specify up to 10 variants per video. Each variant's parameters must be adjacent on the querystring. For example: h=480&q=6&h=1080&q=8 specifies 2 variants, whereas h=480&h=1080&q=6&q=8 specifies 3 variants (which would most likely be a mistake). You can add next=true between groups of parameters to forcefully split them into separate variants.
Navigate to the URL (i.e. request the URL using a simple GET request).
Wait for status: "Succeeded" in the JSON response.
The result will contain a URL to the HTTP Live Streaming (HLS) video:
https://upcdn.io/W142hJk/video/example.mp4?f=hls-h264&h=480&q=6&h=1080&q=8
{ "jobUrl": "https://api.bytescale.com/v2/accounts/W142hJk/jobs/ProcessFileJob/01H3211XMV1VH829RV697VE3WM", "jobDocs": "https://www.bytescale.com/docs/job-api/GetJob", "jobId": "01H3211XMV1VH829RV697VE3WM", "jobType": "ProcessFileJob", "accountId": "W142hJk", "created": 1686916626075, "lastUpdated": 1686916669389, "status": "Succeeded", "summary": { "result": { "type": "Artifact", "artifact": "/video.m3u8", "artifactUrl": "https://upcdn.io/W142hJk/video/example.mp4!f=hls-h264&h=480&q=6&h=1080&q=8&a=/video.m3u8" } }}
Real-time HLS output (f=hls-h264-rt and f=hls-h265-rt) creates the same output video as regular HLS (f=hls-h264 and f=hls-h265) except the video is returned while it's being transcoded. This option is only recommended if video playback is required very shortly after uploading the input video; otherwise, regular HLS is advised for its simpler asynchronous jobs. Only H.264 is widely supported by browsers.
To create an HTTP Live Streaming (HLS) video with real-time transcoding:
Complete the steps from creating an HLS video.
Replace f=hls-h264 with f=hls-h264-rt.
The result will be an M3U8 file that's dynamically updated as new segments finish transcoding:
https://upcdn.io/W142hJk/video/example.mp4?f=hls-h264-rt
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-STREAM-INF:BANDWIDTH=2038521,AVERAGE-BANDWIDTH=2038521,CODECS="avc1.4d4032,mp4a.40.2",RESOLUTION=2048x1080,FRAME-RATE=30.000
example.mp4!f=hls-h264-rt&a=/0f/manifest.m3u8
The Video Metadata API allows you to extract the video's duration, resolution, frame rate, and more.
To extract a video's duration and resolution using JavaScript:
<!DOCTYPE html>
<html>
<body>
  <p>Please wait, loading video metadata...</p>
  <script>
    async function getVideoDurationAndDimensions() {
      const response = await fetch("https://upcdn.io/W142hJk/video/example.mp4?f=meta");
      const jsonData = await response.json();
      const videoTrack = (jsonData.tracks ?? []).find(x => x.type === "Video");
      if (videoTrack === undefined) {
        alert("Cannot find video metadata.")
      } else {
        alert([
          `Duration (seconds): ${videoTrack.duration}`,
          `Width (pixels): ${videoTrack.width}`,
          `Height (pixels): ${videoTrack.height}`,
        ].join("\n"))
      }
    }
    getVideoDurationAndDimensions().then(() => {}, e => alert(`Error: ${e}`))
  </script>
</body>
</html>
The Video Processing API can generate video outputs from the following input file types:
File Extension(s) | Video Container | Video Codecs |
---|---|---|
.gif | No Container | GIF 87a, GIF 89a |
.m2v, .mpeg, .mpg | No Container | AVC (H.264), DV/DVCPRO, HEVC (H.265), MPEG-1, MPEG-2 |
.3g2 | 3G2 | AVC (H.264), H.263, MPEG-4 part 2 |
.3gp | 3GP | AVC (H.264), H.263, MPEG-4 part 2 |
.wmv | Advanced Systems Format (ASF) | VC-1 |
.flv | Adobe Flash | AVC (H.264), Flash 9 File, H.263 |
.avi | Audio Video Interleave (AVI) | Uncompressed, Canopus HQ, DivX/Xvid, DV/DVCPRO, MJPEG |
.mxf | Interoperable Master Format (IMF) | Apple ProRes, JPEG 2000 (J2K) |
.mxf | Material Exchange Format (MXF) | Uncompressed, AVC (H.264), AVC Intra 50/100, Apple ProRes (4444, 4444 XQ, 422, 422 HQ, LT, Proxy), DV/DVCPRO, DV25, DV50, DVCPro HD, JPEG 2000 (J2K), MPEG-2, Panasonic P2, SonyXDCam, SonyXDCam MPEG-4 Proxy, VC-3 |
.mkv | Matroska | AVC (H.264), MPEG-2, MPEG-4 part 2, PCM, VC-1 |
.mpg, .mpeg, .m2p, .ps | MPEG Program Streams (MPEG-PS) | MPEG-2 |
.m2t, .ts, .tsv | MPEG Transport Streams (MPEG-TS) | AVC (H.264), HEVC (H.265), MPEG-2, VC-1 |
.dat, .m1v, .mpeg, .mpg, .mpv | MPEG-1 System Streams | MPEG-1, MPEG-2 |
.mp4, .mpeg4 | MPEG-4 | Uncompressed, DivX/Xvid, H.261, H.262, H.263, AVC (H.264), AVC Intra 50/100, HEVC (H.265), JPEG 2000, MPEG-2, MPEG-4 part 2, VC-1 |
.mov, .qt | QuickTime | Uncompressed, Apple ProRes (4444, 4444 XQ, 422, 422 HQ, LT, Proxy), DV/DVCPRO, DivX/Xvid, H.261, H.262, H.263, AVC (H.264), AVC Intra 50/100, HEVC (H.265), JPEG 2000 (J2K), MJPEG, MPEG-2, MPEG-4 part 2, QuickTime Animation (RLE) |
.webm | WebM | VP8, VP9 |
Bytescale supports up to 16384x16384 inputs for most video codecs. AVC (H.264) inputs are limited to 16384x8192.
Some codec profiles are not supported by Bytescale. It is worth noting that AVC (H.264) High 4:4:4 Predictive is currently not supported. We aim to provide a full list of supported profiles in the near future.
Use the Video Metadata API to extract the duration, resolution, FPS, and other information from a video.
Instructions:
Replace raw with video in your video URL.
Append ?f=meta to the URL.
The result will be a JSON payload describing the video's tracks (see below).
Example video metadata JSON response:
{ "tracks": [ { "bitRate": 2500000, "chromaSubsampling": "4:2:0", "codec": "AVC", "codecId": "avc1", "colorSpace": "YUV", "duration": 766.08, "frameCount": 19152, "frameRate": 25, "height": 576, "rotation": 0, "scanType": "Progressive", "type": "Video", "width": 720 }, { "bitRate": 159980, "bitRateMode": "VBR", "channels": 2, "codec": "AAC", "codecId": "mp4a-40-2", "frameCount": 35875, "frameRate": 46.875, "samplingRate": 48000, "title": "Stereo", "type": "Audio" } ]}
Use the Video Transcoding API to encode your videos into a specific format.
Use the f parameter to change the output format of the video:
Format | Transcoding | Compression | Browser Support |
---|---|---|---|
f=webm-vp8 | medium | good | all (except IE11) |
f=webm-vp9 | slow | good | all (except IE11) |
f=mp4-h264 (recommended) | fast | good | all |
f=mp4-h265 | fast | excellent | limited |
f=hls-h264 | fast | good | all (requires SDK) |
f=hls-h265 | fast | excellent | none |
f=hls-h264-rt | very fast | good | all (requires SDK) |
f=hls-h265-rt | very fast | excellent | none |
Transcodes the video to WebM (VP8 codec).
Caveat: WebM is slower at transcoding than MP4 and HLS.
Response: JSON for an asynchronous transcode job. The JSON will contain the URL to the WebM file on job completion.
Browser support: all browsers (except Internet Explorer 11 and below)
Transcodes the video to WebM (VP9 codec).
Caveat: WebM is slower at transcoding than MP4 and HLS.
Response: JSON for an asynchronous transcode job. The JSON will contain the URL to the WebM file on job completion.
Browser support: all browsers (except Internet Explorer 11 and below)
Transcodes the video to MPEG-4 (H.264/AVC codec).
Response: JSON for an asynchronous transcode job. The JSON will contain the URL to the MP4 file on job completion.
Browser support: all browsers
Transcodes the video to MPEG-4 (H.265/HEVC codec).
Response: JSON for an asynchronous transcode job. The JSON will contain the URL to the MP4 file on job completion.
Browser support: limited (only Chrome and Edge)
Transcodes the video to HLS (H.264/AVC codec).
Response: JSON for an asynchronous transcode job. The JSON will contain the URL to the M3U8 file on job completion.
Browser support: all browsers (requires a video player SDK with HLS support, like Video.js).
Transcodes the video to HLS (H.265/HEVC codec).
Response: JSON for an asynchronous transcode job. The JSON will contain the URL to the M3U8 file on job completion.
Browser support: none
Transcodes the video to HLS (H.264/AVC codec) and returns the video while it's being transcoded.
This output format is designed to reduce the wait time for your viewers when the given video has not been transcoded before. Like the other output formats, this video format incurs an initial delay while transcoding starts. However, unlike the other formats, once transcoding begins the video will be streamed to viewers during transcoding. As with the other formats, once transcoded, the resulting video will be cached and will not need to be transcoded again.
Caveat: This format introduces challenges for some video players and video SDKs due to the use of a live M3U8 playlist during transcoding. As such, we generally recommend using one of the asynchronous formats (which don't end with -rt) for a simpler implementation.
Response: M3U8
Browser support: all browsers (requires a video player SDK with HLS support, like Video.js)
Transcodes the video to HLS (H.265/HEVC codec) and returns the video while it's being transcoded.
This output format is designed to reduce the wait time for your viewers when the given video has not been transcoded before. Like the other output formats, this video format incurs an initial delay while transcoding starts. However, unlike the other formats, once transcoding begins the video will be streamed to viewers during transcoding. As with the other formats, once transcoded, the resulting video will be cached and will not need to be transcoded again.
Caveat: This format introduces challenges for some video players and video SDKs due to the use of a live M3U8 playlist during transcoding. As such, we generally recommend using one of the asynchronous formats (which don't end with -rt) for a simpler implementation.
Response: M3U8
Browser support: none
Returns a webpage with an embedded video player that's configured to play the requested video in H.264.
Useful for sharing links to videos and for previewing/debugging video transformation parameters.
Response: HTML
Browser support: all browsers
This is the default value.
Returns metadata for the video (resolution, dimensions, duration, FPS, etc.).
See the Video Metadata API docs for more information.
Response: JSON (video metadata)
Removes the audio track from the generated video.
Tip: you can set mute=true to hide the Picture-in-Picture (PiP) button added by Firefox for embedded videos.
Default: false
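For example, the following (hypothetical) request produces an MP4 with its audio track removed:
https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&mute=true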
If this flag is present, the video variant expressed by the adjacent parameters on the querystring (e.g. q=6&rt=true&q=8&rt=auto) will be returned to the user while it's being transcoded only if the transcode rate is faster than the playback rate.
Only supported by f=hls-h264-rt, f=hls-h265-rt and f=html-h264.
This is the default value.
If this flag is present, the video variant expressed by the adjacent parameters on the querystring (e.g. q=6&rt=true&q=8&rt=false) will never be returned to the user while it's being transcoded.
Use this option as a performance optimization (instead of using rt=auto) when you know the variant will always transcode at a slower rate than its playback rate:
•When rt=auto is used, the initial HTTP request for the M3U8 master manifest will block until the first few segments of each rt=auto and rt=true variant have been transcoded, before returning the initial M3U8 playlist.
•In general, you should therefore set rt=false on slow-transcoding HLS variants to reduce this latency.
If none of the HLS variants have rt=true or rt=auto then the fastest variant to transcode will be returned during transcoding.
Only supported by f=hls-h264-rt, f=hls-h265-rt and f=html-h264.
If this flag is present, the video variant expressed by the adjacent parameters on the querystring (e.g. q=6&rt=true&q=8&rt=auto) will always be returned to the user while it's being transcoded.
Only supported by f=hls-h264-rt, f=hls-h265-rt and f=html-h264.
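For example, the following (hypothetical) querystring defines two HLS variants, where only the 480p variant is streamed in real time and the 1080p variant is excluded from real-time delivery while it transcodes:
https://upcdn.io/W142hJk/video/example.mp4?f=hls-h264-rt&h=480&rt=true&h=1080&rt=false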
Use the Video Compression API to control the file size of your video.
Sets the video quality.
Only supported by h264 and h265 formats. Also requires brm=qbr (default).
For all other formats (f) and bitrate modes (brm): use bitrate (br) to adjust the video quality.
Supported values:
•11: lowest quality (and lowest file size).
•100: highest quality (and highest file size).
Supported values (deprecated):
•1: lowest quality (and lowest file size).
•10: highest quality (and highest file size).
Please note: support for the deprecated 1 to 10 range will be dropped in the future. Impacted accounts will be notified prior to this change.
Default: 80
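For example, the following (hypothetical) request produces a higher-quality MP4 using the current 11-100 range:
https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&q=90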
Multi-pass encoding (highest quality).
Only supported by h264 and h265 formats.
Professional pricing applies (see video pricing).
Makes the output video track use a quality-defined bitrate (QBR).
The bitrate will be automatically adjusted based on the given quality score (see q).
Recommended for most cases, except where you need control over the resulting file size.
This is the default value for h264 and h265 formats.
Makes the output video track use a variable bitrate (VBR).
More complex scenes will use a higher bitrate, whereas less complex scenes will use a lower bitrate.
This is the default value for all other formats.
Makes the output video track use a constant bitrate (CBR).
Sets the output video bitrate (kbps):
•If brm=qbr then br will be interpreted as a maximum bitrate and q will dictate the mean bitrate.
•If brm=vbr then br will be interpreted as a mean bitrate.
•If brm=cbr then br will be interpreted as a constant bitrate.
Accepts any value between 1 and 100000.
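For example, the following (hypothetical) request encodes an MP4 with a variable bitrate averaging roughly 2500 kbps:
https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&brm=vbr&br=2500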
Sets the output audio bitrate (kbps).
Supported values for f=mp4-h264, f=hls-h264, f=hls-h264-rt and f=html-h264:
•16
•20
•24
•28
•32
•40
•48
•56
•64
•80
•96
•112
•128
•160
•192
•224
•256
•288
•320
•384
•448
•512
•576
Supported values for f=webm-vp8 and f=webm-vp9:
•32
•40
•48
•56
•64
•72
•80
•88
•96
•104
•112
•120
•128
•136
•144
•152
•160
•168
•176
•184
•192
Default: 96
Removes noise and compression artifacts from the input video.
•-1: automatic noise reduction (default)
•0: disable noise reduction
•1-5: enable noise reduction (1 = lowest, 5 = highest)
Reduces noise and imperceptible signals, enhancing visual quality while reducing file size.
Most effective on videos with high noise, like those shot in low light.
•-1: automatic optimization
•0: disable optimization (default)
•1-3: enable optimization (1 = lowest, 3 = highest)
Professional pricing applies (see video pricing).
Quantization reduces file size by rounding off less important details in the video.
•-1: automatic quantization (default)
•0: disable quantization
•1-5: enable quantization (1 = lowest, 5 = highest)
Sets the GOP size in frames. This is the interval between IDR-frames (frames with full picture information).
This is an advanced setting.
Manually setting this field can have the following effects:
•Longer GOPs produce smaller file sizes, better quality for static scenes, but slower video seeking and random access.
•Shorter GOPs produce larger file sizes, better quality for dynamic scenes, and faster video seeking and random access.
Bytescale automatically sets this value for you (by default).
This value is in frames.
Sets the number of frames that can be referenced by B-frames and P-frames.
This is an advanced setting.
Manually setting this field can have the following effects:
•More reference frames can increase video compression and quality.
•Fewer reference frames can accelerate encoding and also reduce decoding effort on the user's device.
Supported values:
•-1: automatic (default)
•1-6: manual reference frame count
Sets the number of B-frames between reference frames (P-frames and I-frames).
This is an advanced setting.
Manually setting this field can have the following effects:
•More B-frames can increase video compression and quality.
•Fewer B-frames can accelerate encoding and also reduce decoding effort on the user's device.
Supported values:
•-1: automatic (default)
•0-7: manual B-frame count
Inserts I-frames on scene changes. I-frames contain full frame information, so they generally enhance video quality when inserted at scene changes.
This is an advanced setting.
Manually setting this field can have the following effects:
•If true (default) then video quality is improved for most video types, although the video's file size may be larger.
•If false then video quality may improve for certain video types while file size should also be lower.
Default: true
Use the Video Resizing API to resize videos to a different size.
Width to resize the video to.
Width override parameter for portrait videos.
If specified, allows you to use w for landscape videos and wp for portrait videos.
If not specified, then w will be used for all videos.
Height to resize the video to.
Height override parameter for portrait videos.
If specified, allows you to use h for landscape videos and hp for portrait videos.
If not specified, then h will be used for all videos.
Sets the video sharpness to use when resizing the video:
•0 is the softest.
•100 is the sharpest.
Default: 50
Resizes the video to the given dimensions (see: w and h).
The resulting video may be cropped in one dimension to preserve the aspect ratio of the original video.
The cropped edges are determined by the crop parameter.
•Resulting video size: = w x h
•Aspect ratio preserved: yes
•Cropping: yes
Enlarges the video to the given dimensions (see: w and h) but won't shrink videos that already exceed the dimensions.
If enlargement occurs, the video will be enlarged until at least one dimension is equal to the given dimensions, while the other dimension will be ≤ the given dimensions.
•Resulting video size: ≥ w | ≥ h
•Aspect ratio preserved: yes
•Cropping: no
Enlarges the video to the given dimensions (see: w and h) but won't shrink videos that already exceed the dimensions.
If enlargement occurs, the resulting video's dimensions will be ≥ the given dimensions.
•Resulting video size: ≥ w & ≥ h
•Aspect ratio preserved: yes
•Cropping: no
Resizes the video to the given height (see: h).
Width will be automatically set to preserve the aspect ratio of the original video.
•Resulting video size: = h
•Aspect ratio preserved: yes
•Cropping: no
Resizes the video to the given dimensions (see: w and h).
The resulting video may be smaller in one dimension, while the other will match the given dimensions exactly.
•Resulting video size: (≤ w & = h) | (= w & ≤ h)
•Aspect ratio preserved: yes
•Cropping: no
Resizes the video to the given dimensions (see: w and h).
The resulting video may be larger in one dimension, while the other will match the given dimensions exactly.
•Resulting video size: (≥ w & = h) | (= w & ≥ h)
•Aspect ratio preserved: yes
•Cropping: no
Shrinks the video to the given dimensions (see: w and h) but won't enlarge videos that are already below the dimensions.
If shrinking occurs, the resulting video's dimensions will be ≤ the given dimensions.
•Resulting video size: ≤ w & ≤ h
•Aspect ratio preserved: yes
•Cropping: no
Shrinks the video to the given dimensions (see: w and h) but won't enlarge videos that are already below the dimensions.
If shrinking occurs, the video will be shrunk until at least one dimension is equal to the given dimensions, while the other dimension will be ≥ the given dimensions.
•Resulting video size: ≤ w | ≤ h
•Aspect ratio preserved: yes
•Cropping: no
Resizes the video to the given dimensions, stretching to fit if required (see: w and h).
•Resulting video size: = w x h
•Aspect ratio preserved: no
•Cropping: no
Resizes the video to the given width (see: w).
Height will be automatically set to preserve the aspect ratio of the original video.
•Resulting video size: = w
•Aspect ratio preserved: yes
•Cropping: no
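For example, the following (hypothetical) request resizes the output to 1280x720, cropping one dimension if needed to preserve the input video's aspect ratio:
https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&w=1280&h=720&fit=crop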
Automatically crops to the bottom of the video.
The crop is performed by removing pixels evenly from the left and right of the video, or from the top of the video, but never both.
To use this parameter, you must set fit=crop or leave fit unspecified.
For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.
Automatically crops to the bottom-left corner of the video.
The crop is performed by removing pixels from the top or right of the video, but never both.
To use this parameter, you must set fit=crop or leave fit unspecified.
For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.
Automatically crops to the bottom-right corner of the video.
The crop is performed by removing pixels from the top or left of the video, but never both.
To use this parameter, you must set fit=crop or leave fit unspecified.
For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.
Automatically crops to the center of the video.
The crop is performed by removing pixels evenly from both sides of one axis, while leaving the other axis uncropped.
To use this parameter, you must set fit=crop or leave fit unspecified.
For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.
Automatically crops to the left of the video.
The crop is performed by removing pixels evenly from the top and bottom of the video, or from the right of the video, but never both.
To use this parameter, you must set fit=crop or leave fit unspecified.
For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.
Automatically crops to the right of the video.
The crop is performed by removing pixels evenly from the top and bottom of the video, or from the left of the video, but never both.
To use this parameter, you must set fit=crop or leave fit unspecified.
For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.
Automatically crops to the top of the video.
The crop is performed by removing pixels evenly from the left and right of the video, or from the bottom of the video, but never both.
To use this parameter, you must set fit=crop or leave fit unspecified.
For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.
Automatically crops to the top-left corner of the video.
The crop is performed by removing pixels from the bottom or right of the video, but never both.
To use this parameter, you must set fit=crop or leave fit unspecified.
For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.
Automatically crops to the top-right corner of the video.
The crop is performed by removing pixels from the bottom or left of the video, but never both.
To use this parameter, you must set fit=crop or leave fit unspecified.
For manual cropping: use crop-x, crop-y, crop-w and crop-h instead of crop.
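For example, a (hypothetical) manual crop, assuming crop-x and crop-y specify the top-left offset of the crop region and crop-w and crop-h specify its size (these semantics are an assumption; only the parameter names appear above):
https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&crop-x=100&crop-y=50&crop-w=640&crop-h=360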
Use the Video Trimming API to remove footage from the start and/or end of a video.
Starts the video at the specified time in the input video, in seconds, and removes all frames before that point.
If ts exceeds the length of the video, then an error will be returned.
Supports numbers between 0 - 86399 with up to two decimal places. To provide frame accuracy for video inputs, decimals will be interpreted as frame numbers, not milliseconds.
Ends the video at the specified time in the input video, in seconds, and removes all frames after that point.
If te exceeds the length of the video, then no error will be returned, and the parameter effectively does nothing.
If tm=after-repeat then te specifies the end position for the final clip of the repeated group (as opposed to the end position for the combined sequence of clips).
Supports numbers between 0 - 86399 with up to two decimal places. To provide frame accuracy for video inputs, decimals will be interpreted as frame numbers, not milliseconds.
Applies the trim specified by ts and/or te after the rp parameter is applied.
Applies the trim specified by ts and/or te before the rp parameter is applied.
This is the default value.
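For example, the following (hypothetical) request keeps only the footage between the 10-second and 30-second marks of the input video:
https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&ts=10&te=30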
Use the Video Concatenation API to append additional videos to the primary video's timeline.
Number of times to play the video.
If this parameter appears after a video parameter, then it will repeat the appended video file only.
If this parameter appears before any video parameters, then it will repeat the primary video file only.
Default: 1
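For example, the following (hypothetical) request plays the primary video twice in the output:
https://upcdn.io/W142hJk/video/example.mp4?f=mp4-h264&rp=2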
Use the Video Timecode API to add a burnt-in timecode (BITC) to every frame in the video.
Uses a 48-point font size for the timecode overlay.
Uses a 32-point font size for the timecode overlay.
Uses a 16-point font size for the timecode overlay.
Uses a 10-point font size for the timecode overlay.
Text prefix to add to the timecode overlay.
Positions the timecode overlay at the bottom-center of the frame.
Positions the timecode overlay at the bottom-left of the frame.
Positions the timecode overlay at the bottom-right of the frame.
Positions the timecode overlay at the center of the frame.
Positions the timecode overlay at the left of the frame.
Positions the timecode overlay at the right of the frame.
Positions the timecode overlay at the top-center of the frame.
Positions the timecode overlay at the top-left of the frame.
Positions the timecode overlay at the top-right of the frame.
Appends the content from another media file (video or audio file) to the output.
You can specify this parameter multiple times to append multiple media files.
If you specify append multiple times, then the media files will be concatenated in the order of the querystring parameters, with the primary input video (specified on the URL's file path) appearing first.
To use: specify the "file path" attribute of another media file as the query parameter's value.
Sets the output audio sample rate (kHz).
Supported values for f=mp4-h264, f=hls-h264, f=hls-h264-rt and f=html-h264:
•8
•12
•16
•22.05
•24
•32
•44.1
•48
•88.2
•96
Supported values for f=webm-vp8 and f=webm-vp9:
•16
•24
•48
Note: the audio sample rate will be automatically adjusted if the provided value is unsupported by the requested audio bitrate for the requested format. For example, if you use H.264 with an audio bitrate of 96 kbps, then the audio sample rate will be adjusted to be between 32 kHz and 48 kHz.
Default: 48
The Video Processing API is available on all Bytescale Plans.
Your processing quota (see pricing) is consumed based on the output video's duration multiplied by a "processing multiplier", which is determined by the codec, resolution, and frame rate of the output video.
Videos can be played an unlimited number of times.
Your processing quota will only be deducted once per URL: for the very first request to the URL.
There is a minimum billable duration of 10 seconds per video.
Video billing example:
A 60-second video encoded to H.264 in HD at 30 FPS would consume 205.8 seconds (60 × 3.43) from your monthly processing quota.
If the video is initially played in January 2024, and is then played 100k times for the following 2 years, then you would be billed 205.8 seconds in January 2024 and 0 seconds in all the following months. (This assumes you never clear your permanent cache).
Codec | Resolution | Framerate | Processing Multiplier |
---|---|---|---|
H.264 | SD | 30 | 1.50 |
H.264 | SD | 60 | 2.15 |
H.264 | SD | 120 | 2.59 |
H.264 | HD | 30 | 3.43 |
H.264 | HD | 60 | 4.30 |
H.264 | HD | 120 | 5.15 |
H.264 | 4K | 30 | 6.86 |
H.264 | 4K | 60 | 8.58 |
H.264 | 4K | 120 | 10.29 |
H.265 | SD | 30 | 5.49 |
H.265 | SD | 60 | 6.86 |
H.265 | SD | 120 | 8.23 |
H.265 | HD | 30 | 10.98 |
H.265 | HD | 60 | 13.72 |
H.265 | HD | 120 | 16.46 |
H.265 | 4K | 30 | 21.95 |
H.265 | 4K | 60 | 27.43 |
H.265 | 4K | 120 | 32.92 |
VP8 | SD | 30 | 3.09 |
VP8 | SD | 60 | 5.40 |
VP8 | SD | 120 | 6.18 |
VP8 | HD | 30 | 6.18 |
VP8 | HD | 60 | 10.80 |
VP8 | HD | 120 | 12.35 |
VP8 | 4K | 30 | 12.35 |
VP8 | 4K | 60 | 21.60 |
VP8 | 4K | 120 | 24.69 |
VP9 | SD | 30 | 3.43 |
VP9 | SD | 60 | 6.00 |
VP9 | SD | 120 | 6.86 |
VP9 | HD | 30 | 6.86 |
VP9 | HD | 60 | 12.00 |
VP9 | HD | 120 | 13.72 |
VP9 | 4K | 30 | 13.72 |
VP9 | 4K | 60 | 24.00 |
VP9 | 4K | 120 | 27.43 |
Video resolution is measured using the output video's smallest dimension:
Resolution | Min Resolution | Max Resolution |
---|---|---|
SD | 1 | 719 |
HD | 720 | 1080 |
4K | 1081 | 2160 |
Video resolution example:
An ultrawide 1800×710 video would be considered SD as its smallest dimension falls within the range of the SD definition above.
When using f=hls-h264, f=hls-h264-rt, or f=html-h264 (which uses f=hls-h264-rt internally), your processing quota will be consumed per HLS variant.
When using f=hls-h264-rt each real-time variant (rt=true or rt=auto) will have an additional 10 seconds added to its billable duration.
The default behavior for HLS outputs is to produce one HLS H.264 variant at 30 FPS using the input video's dimensions.
You can change this behavior using the querystring parameters documented on this page.
HLS pricing example:
Given an input video of 60 seconds and the querystring ?f=hls-h264-rt&q=4&q=6&q=8&rt=false, you would be billed:
3×60 seconds for 3× HLS variants (q=4&q=6&q=8).
2×10 seconds for 2× HLS variants using real-time transcoding.
The first two variants on the querystring (q=4&q=6) do not specify rt parameters, so will default to rt=auto.
Per the pricing above, real-time variants incur an additional 10 seconds of billable duration.
200 seconds total billed duration: 3×60 + 2×10
Bytescale offers several professional video transcoding features that carry an additional charge.
A multiplier of 1.6 is applied to the above price table if any of the parameters marked "Professional pricing applies" (see above) are used.