Once a job is created, you need to know whether it failed, completed successfully, or is still processing.
By sending a callback URL together with the job, you will be notified instantly when the job finishes, either with the status failed or completed.
If you want to receive a callback with every job status change (e.g. ready or processing, in addition to failed or completed), set the flag notify_status to true.
The information received in your callback script has the same structure as the response in Send a job with a remote file.
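For reference, a job request that registers a callback and opts into all status notifications might be sketched like this (the callback URL is a placeholder and the input and conversion values are just examples):

```json
{
  "input": [{
    "type": "remote",
    "source": "https://example.com/image.jpg"
  }],
  "conversion": [{
    "target": "png"
  }],
  "callback": "https://your-server.example.com/callback.php",
  "notify_status": true
}
```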
Below you can find a very simple example of a callback script in PHP.
It writes the content of the callback as a human-readable array to a plain text file (named after the job's id and status) inside the script's directory.
This way, you can easily inspect all of the possible status contents by simply setting "notify_status": true in your JSON API call.
```php
<?php

// Read the raw callback payload sent by the API and decode it.
$job = file_get_contents('php://input');
$jobAsArray = json_decode($job, true);

// If the payload does not contain a job id, log it for debugging and stop.
if (empty($jobAsArray['id'])) {
    file_put_contents(
        'callback_errors.log',
        'Invalid callback received. Content: ' . var_export($job, true) . PHP_EOL,
        FILE_APPEND
    );
    exit;
}

// Dump the decoded job into a text file named after the job's id and status code.
$filename = sprintf('%s_%s.txt', $jobAsArray['id'], $jobAsArray['status']['code']);
file_put_contents($filename, print_r($jobAsArray, true));
```
This is especially interesting if you have an application that uses our API. An application can have different steps where one can upload files or provide URLs, or even a step where one selects conversion options.
Let's assume an application with the following structure:
In this case, a job can be created step by step, defining everything needed in each step.
The 1st step is to create the job and save the id or token. A job can be created providing the field process with the value false, so the job will not start processing even though it has all the required data.
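As a minimal sketch, the job-creation request from this 1st step could contain nothing more than the process flag; inputs and conversions are added in the later steps:

```json
{
  "process": false
}
```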
Following that, the 2nd step is to take the conversion(s) defined by the user and feed them to our API using the POST /v2/jobs/{id}/conversions endpoint.
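For illustration, a single conversion posted to POST /v2/jobs/{id}/conversions could reuse the structure of the conversion objects shown elsewhere in this documentation (the target here is just an example):

```json
{
  "target": "png"
}
```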
Lastly, for the 3rd step, take the input(s) defined by the user and feed them to our API using the POST /v2/jobs/{id}/input endpoint.
If you have more than one input to send in the same POST request, you can define them inside an array as in the following example:
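A sketch of such a request, assuming the endpoint accepts a plain JSON array of input objects with the same structure used elsewhere in this documentation (the URLs are placeholders):

```json
[
  {
    "type": "remote",
    "source": "https://example.com/first-file.jpg"
  },
  {
    "type": "remote",
    "source": "https://example.com/second-file.jpg"
  }
]
```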
The last step is to start the job by sending a PATCH request to the /jobs/{id} endpoint with the process field set to true. A callback parameter with the URL to a previously set up callback script that will handle the information of the completed job can also be set here.
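A minimal sketch of that final PATCH request body (the callback URL is a placeholder):

```json
{
  "process": true,
  "callback": "https://your-server.example.com/callback.php"
}
```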
To allow our servers to download files directly from Amazon S3 storage, the following permissions are required:
s3:GetObject
s3:PutObject
s3:PutObjectAcl
Below you can find a list with the fields that are accepted in this input type:
| Field | Description | Required | Default |
|---|---|---|---|
| input.type | Specifies the type of input. For cloud storage always use cloud. | Yes | N/A |
| input.source | Tells our servers which cloud storage provider must be contacted, in this case amazons3. | Yes | N/A |
| input.parameters.bucket | Determines from which bucket our servers will get the file. | Yes | N/A |
| input.parameters.region | Indicates the region configured for your bucket. A list of Amazon S3 region names can be found here. If you don't specify this field and your bucket is configured to use another region than the default, any download will fail. | No | eu-central-1 |
| input.parameters.file | Amazon S3 key of the file to download. Usually looks like a normal file path, e.g. pictures/mountains.jpg. | Yes | N/A |
| input.credentials.accesskeyid | The Amazon S3 access key ID. | Yes | N/A |
| input.credentials.secretaccesskey | The Amazon S3 secret access key. | Yes | N/A |
| input.credentials.sessiontoken | Together with secretaccesskey and accesskeyid, this is used to authenticate using temporary credentials. For more information on how to generate temporary credentials, please check how to install the AWS CLI tool and how to call AWS STS get-session-token. | No | N/A |
Filenames can be specified as described in the section for remote inputs.
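Putting the fields above together, an Amazon S3 input could be sketched like this (bucket, key and credentials are placeholders; the conversion is just an example):

```json
{
  "input": [{
    "type": "cloud",
    "source": "amazons3",
    "parameters": {
      "bucket": "your-bucket-name",
      "region": "eu-central-1",
      "file": "pictures/mountains.jpg"
    },
    "credentials": {
      "accesskeyid": "YOUR_ACCESS_KEY_ID",
      "secretaccesskey": "YOUR_SECRET_ACCESS_KEY"
    }
  }],
  "conversion": [{
    "target": "png"
  }]
}
```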
To allow our servers to save files to Google Cloud Storage, follow these instructions. Below you can find a list with the fields accepted in this output target:
| Field | Description | Required | Default |
|---|---|---|---|
| output_target.type | Specifies the type of output target, in this case googlecloud. | Yes | N/A |
| output_target.parameters.projectid | The ID of your Google Cloud project. | Yes | N/A |
| output_target.parameters.bucket | Determines to which bucket our servers upload the file. | Yes | N/A |
| output_target.parameters.file | Complete path to where the file will be uploaded, e.g. folder-inside-bucket/image.jpeg. | Yes | N/A |
| output_target.credentials.keyfile | Here, specify the contents of your JSON private key file. You can generate one following these instructions. | Yes | N/A |
Please note that in some circumstances (e.g. an already existing filename on the cloud) the upload can be refused. For this reason, it's highly recommended to upload converted files to new directories.
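Based on the fields above, a Google Cloud Storage output target might be sketched as follows (project ID, bucket, path and key file contents are placeholders; check the full job schema for the exact wrapping of output_target):

```json
{
  "output_target": {
    "type": "googlecloud",
    "parameters": {
      "projectid": "your-project-id",
      "bucket": "your-bucket-name",
      "file": "folder-inside-bucket/image.jpeg"
    },
    "credentials": {
      "keyfile": "<contents of your JSON private key file>"
    }
  }
}
```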
To allow our servers to save files to Microsoft Azure Blob Storage, the following parameters are available for the request:
| Field | Description | Required | Default |
|---|---|---|---|
| output_target.type | Specifies the type of output target, in this case azure. | Yes | N/A |
| output_target.parameters.container | The name of the container that will contain the uploaded files. | Yes | N/A |
| output_target.parameters.file | Complete path to where the file will be uploaded, e.g. folder-inside-container/image.jpeg. | Yes | N/A |
| output_target.credentials.accountname | Can be found in the storage account dashboard. It's the name before the blob.core.windows.net URL. | Yes | N/A |
| output_target.credentials.accountkey | Can be found in the storage account dashboard under the Access Keys menu entry. | Yes | N/A |
Please note that in some circumstances (e.g. an already existing filename on the cloud) the upload can be refused. For this reason, it's highly recommended to upload converted files to new directories.
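Based on the fields above, an Azure Blob Storage output target might be sketched as follows (container, path, account name and key are placeholders; check the full job schema for the exact wrapping of output_target):

```json
{
  "output_target": {
    "type": "azure",
    "parameters": {
      "container": "your-container-name",
      "file": "folder-inside-container/image.jpeg"
    },
    "credentials": {
      "accountname": "yourstorageaccount",
      "accountkey": "YOUR_ACCOUNT_KEY"
    }
  }
}
```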
To allow our servers to upload files directly to an FTP server, the following parameters are available for the request:
| Field | Description | Required | Default |
|---|---|---|---|
| output_target.type | Specifies the type of output target, in this case ftp. | Yes | N/A |
| output_target.parameters.host | The URL or IP of the FTP server. | Yes | N/A |
| output_target.parameters.file | Complete path to where the file will be uploaded, e.g. folder-in-ftp/image.jpeg. | Yes | N/A |
| output_target.parameters.port | The port used to connect to the FTP server. | No | 21 |
| output_target.credentials.username | The username of the FTP server account. | Yes | N/A |
| output_target.credentials.password | The password of the FTP server account. | Yes | N/A |
Please note that in some circumstances (e.g. an already existing filename on the server) the upload can be refused. For this reason, it's highly recommended to upload converted files to new directories.
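Based on the fields above, an FTP output target might be sketched as follows (host, path and credentials are placeholders; check the full job schema for the exact wrapping of output_target):

```json
{
  "output_target": {
    "type": "ftp",
    "parameters": {
      "host": "ftp.example.com",
      "file": "folder-in-ftp/image.jpeg",
      "port": 21
    },
    "credentials": {
      "username": "your-ftp-username",
      "password": "your-ftp-password"
    }
  }
}
```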
Your API key comes with constraints, based on the contract you have in place, that determine the capabilities available to you.
These restrictions typically concern limits such as the computational time allowed per conversion or the maximum input file size, among others.
These constraints can be further customized using the limits object within the job's JSON request.
The following parameters are available for sending the request:
| Object | Section | Constraints | Description |
|---|---|---|---|
| limits | job | max_credit | Set the maximum credit that can be spent by the job. |
| limits | job | max_inputs | Set the maximum number of inputs accepted by the job. |
| limits | job | max_process_time | Set the maximum processing time, in seconds, for the job. |
| limits | input | max_file_size | Set the maximum file size for the job inputs. |
| limits | output | max_downloads | Set how many times a converted file can be downloaded. |
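As an illustration of this structure only (the values are arbitrary and the unit of max_file_size is not specified here), a limits object combining several sections could look like this:

```json
{
  "limits": {
    "job": {
      "max_inputs": 5,
      "max_process_time": 120
    },
    "input": {
      "max_file_size": 10485760
    },
    "output": {
      "max_downloads": 3
    }
  }
}
```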
It is important to note that the fail_on_conversion_error boolean value may impact the functionality of the limits.
The example below shows a job sent with a single image as the common input and three separate targets, allowing you to convert the image into different formats using just one job.
If you wish to limit the number of credits used, you can set the max_credit limit.
However, the behavior may vary based on the value of fail_on_conversion_error, as described in the table below:
| fail_on_conversion_error | max_credit | time spent | expected behaviour |
|---|---|---|---|
| true or false | 4 | 3 | Conversion is done and 3 credits are deducted from your account. |
| true | 2 | 3 | The time spent is more than the allowed one. Conversion fails and no credits are deducted from your account. |
| false | 2 | 3 | The time spent is more than the allowed one, but you chose not to fail the conversion. Conversion partly succeeds: only the outputs produced before the maximum of 2 credits was reached are available, and just 2 credits are deducted from your account. |
```json
{
  "input": [{
    "type": "remote",
    "source": "https://example-files.online-convert.com/raster%20image/jpg/example_small.jpg"
  }],
  "conversion": [{
    "target": "png"
  }, {
    "target": "bmp"
  }, {
    "target": "gif"
  }],
  "limits": {
    "job": {
      "max_credit": 2
    }
  },
  "fail_on_conversion_error": false
}
```