If you’re new to AWS Portal we recommend starting here. If you’re new to Deadline we recommend starting here.

The AWS Portal Asset Transfer System (Advanced) - Overview

The AWS Portal Asset Transfer System is made up of the following components:

  • the AWS Portal Asset Server,

  • the S3 Backed Cache Central Controller, and

  • the S3-Backed Cache Client

../../_images/aws_portal_asset_transfer_system_overview.png

The diagram above illustrates the components of the AWS Portal Asset Transfer System and the ways in which they interact. The sections that follow provide an overview of the components in more detail.

Note

You don’t need to set up the S3 Backed Cache Central Controller or the S3-Backed Cache Client. These services are deployed automatically when you launch an infrastructure or start a spot fleet.

AWS Portal Asset Server


AWS Portal Asset Server is a service/daemon (program that runs in the background) that resides on your on-premise network. It serves the following functions:

  • Transfer files between the on-premise network file system and your AWS S3 bucket.

  • Handle requests from the other components of the AWS Portal Asset Transfer System, such as file downloads and file uploads.

During the installation of AWS Portal Asset Server, the credentials you provide are used to create a restricted IAM user. This new user has very limited access to your AWS resources. It can only:

  1. Upload and download objects from the S3 bucket of the AWS Portal Asset Transfer System.

  2. Read logs from the S3 bucket of the AWS Portal log stream.
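For illustration, the scope of these permissions roughly corresponds to the following AWS CLI calls. This is a sketch only: the bucket names and object keys are hypothetical placeholders, since the actual buckets are created and managed by AWS Portal.

aws s3 cp ./scene.ma s3://example-asset-transfer-bucket/assets/scene.ma    # upload an asset object
aws s3 cp s3://example-asset-transfer-bucket/assets/scene.ma ./scene.ma    # download an asset object
aws s3 cp s3://example-log-stream-bucket/logs/gateway.log -                # read a log object to stdout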

AWS Portal Asset Server receives file requests from other components of the AWS Portal Asset Transfer System. Upon receiving a request, it uploads or downloads the files to or from the S3 bucket as requested. These transfers are completed through a secure channel (HTTPS).

AWS Portal Asset Server also pre-caches asset files to the S3 bucket and informs the other components when it does. When a job is submitted to Deadline with pre-caching enabled, AWS Portal Asset Server pre-uploads the asset files to the S3 bucket so that, when the Workers launched in a spot fleet begin to render, all of the assets are already available for download.

Configuration

If you want to know how to configure AWS Portal Asset Server, please refer to Asset Server Options.

Service/Daemon

AWS Portal Asset Server is run as a service on Windows or a daemon on Linux.

On Windows, you can control the service through the Services snap-in panel:

  1. Press the key combination WINDOWS + R

  2. Type services.msc

  3. Press RETURN

  4. Right-click AWS Portal Asset Server to start, stop, or restart the service
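If you prefer the command line, the service can usually be controlled from an elevated command prompt as well. The service name below is assumed to match the display name shown in the Services panel; verify the actual name on your machine.

net stop "AWS Portal Asset Server"     # stops the service (assumed service name)
net start "AWS Portal Asset Server"    # starts the service (assumed service name)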

On Linux, run the following commands to control the service:

service awsportalassetservershellscript start # starts the service
service awsportalassetservershellscript stop # stops the service
service awsportalassetservershellscript restart # restarts the service

Logs

AWS Portal Asset Server maintains logs that can be useful for troubleshooting any issues related to the AWS Portal Asset Transfer System.

On Windows, the log is stored at the following location:

C:\ProgramData\Thinkbox\AWS Portal Asset Server\logs\awsportalassetserver_controller.log

On Linux, the log is stored at the following location:

/var/log/Thinkbox/AWS Portal Asset Server/awsportalassetserver_controller.log
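When troubleshooting, it can help to watch the log while reproducing the issue. On Linux, for example:

tail -f "/var/log/Thinkbox/AWS Portal Asset Server/awsportalassetserver_controller.log"   # follow the log in real time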

The S3 Backed Cache Central Controller

The S3 Backed Cache Central Controller is a daemon that runs on the “Gateway” EC2 instance that gets created within an AWS Portal Infrastructure. The S3 Backed Cache Central Controller communicates with the AWS Portal Asset Server and the S3-Backed Cache Clients running on AWS Portal Spot Fleets to coordinate asset transfers between the on-premise network and the AWS Portal Infrastructure using the S3 bucket.

The S3 Backed Cache Central Controller is responsible for determining when on-premise assets have changed and need to be transferred again. For this purpose, the S3 Backed Cache Central Controller maintains a cache of the asset file metadata, which includes:

  • the file name

  • the file size

  • the file permissions

  • the last access time

  • the last modification time

  • the last change time

The S3 Backed Cache Central Controller requests metadata from the AWS Portal Asset Server. When it determines that a file has changed, it requests that the AWS Portal Asset Server upload the asset to the S3 bucket.
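These cached fields match the information reported by standard file-system tools, which can be useful when investigating why an asset was (or was not) re-uploaded. On Linux, for example, stat reports the same size, permission, and timestamp values for an on-premise asset (the path here is a hypothetical example):

stat /mnt/projects/scene.ma   # prints size, permissions, and access/modify/change times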

IMPORTANT: The AWS Portal Asset Transfer System assumes that on-premise assets will not change during the course of a render job.

Service/Daemon

Note

You need to SSH into the gateway instance in order to check the status of the S3 Backed Cache Central Controller. Please refer to this page on how to SSH into the gateway instance.

To check the status of the S3 Backed Cache Central Controller:

sudo service s3backedcache_central_service status

To stop the S3 Backed Cache Central Controller:

sudo service s3backedcache_central_service stop

To start the S3 Backed Cache Central Controller:

sudo service s3backedcache_central_service start

Logs

You can find its log at the following location. It is also available in CloudWatch.

/var/log/Thinkbox/S3BackedCache/central_controller.log

CloudWatch logs expire after 60 days by default. You can change the expiration by right-clicking on a running infrastructure and selecting ‘Log Retention Policy’.

../../_images/AWSPanelLogRetention.png
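If you prefer the command line, the retention policy can also be changed with the AWS CLI. The log group name below is a hypothetical placeholder; the actual log group is created by AWS Portal, so look up its name in the CloudWatch console first.

aws logs put-retention-policy --log-group-name example-awsportal-log-group --retention-in-days 60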

The S3-Backed Cache Client

The S3-Backed Cache Client (formerly known as the Worker Controller) is a daemon that runs on the Spot instances that are created by AWS Portal. The S3-Backed Cache Client provides a file system that is mounted on the Spot instances and used by rendering software to access asset files.

Note

The S3-Backed Cache Client also runs on the Gateway instance of infrastructures launched by AWS Portal, where it hosts a Samba file share. Windows Deadline AMIs prior to version 10.0.26.0 mounted this Samba file share, and the Gateway instance continues to host an S3-Backed Cache Client for backwards compatibility with these AMIs.

The AMIs that are shown in the Launch New Spot Fleet dialog in version 10.0.26.0 and later host their own S3-Backed Cache Client and do not mount the Samba share.

The S3-Backed Cache Client fetches assets from the S3 bucket using HTTPS and caches the contents to an EBS volume. All filenames are obfuscated by the S3-Backed Cache Client.

The S3-Backed Cache Client also receives file metadata from the other components and stores it in a database that it manages. When rendering software requests metadata through the mounted file system, the S3-Backed Cache Client returns it from this database. File metadata consists of:

  • filename

  • size of the file

  • permission bits

  • last access time

  • last modification time

  • last change time

Output files are also stored temporarily on the EBS volume. Once a job has completed, the S3-Backed Cache Client decides whether or not the output files should be uploaded.

If the EBS volume used by the S3-Backed Cache Client does not have enough free space, the S3-Backed Cache Client attempts to automatically clean up unused files. However, if all of the files on the EBS volume are currently in use, a not-enough-space error is reported.
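If you suspect the cache volume has run out of space, you can check its usage from the instance with standard tools. The mount point below is an assumption based on the Linux log location shown later in this section; verify the actual mount point on your instances.

df -h /mnt/dtu   # assumed mount point of the cache volume; verify before relying on it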

Configuration

Warning

Be careful when changing the configuration file. Any changes may cause unpredictable behavior.

The S3-Backed Cache Client’s configuration file can be found at the following platform-specific locations.

Windows:

%PROGRAMFILES%\Thinkbox\S3backedCache\Client\Lib\site-packages\slave\etc\client.yaml.windows

Linux:

/var/lib/Thinkbox/S3BackedCache/Client/client-config-linux.yaml

To make configuration changes persistent, apply them to an on-demand instance and then create an AMI from that instance. Otherwise, you will have to SSH into each spot instance, make the changes, and restart the service every time you create a new spot fleet.
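As a sketch of that second step, an AMI can be created from the configured on-demand instance with the AWS CLI. The instance ID and image name below are hypothetical placeholders.

aws ec2 create-image --instance-id i-0123456789abcdef0 --name "deadline-worker-custom-cache-config"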

Service/Daemon

Note

You need to SSH/RDP into the spot fleet instances in order to check the status of the S3-Backed Cache Client. Please refer to the documentation on connecting to the AWS Portal Worker instances.

Windows:

On Windows, you can control the service through the Services snap-in panel:

  1. Press the key combination WINDOWS + R

  2. Type services.msc

  3. Press RETURN

  4. Right-click DeadlineS3BackedCacheClient to start, stop, or restart the service

Linux:

To check the status of the S3-Backed Cache Client:

sudo service s3backedcache_client_service status

To stop the S3-Backed Cache Client:

sudo service s3backedcache_client_service stop

To start the S3-Backed Cache Client:

sudo service s3backedcache_client_service start

Logs

You can find its logs at the following platform-specific locations. They are also available in CloudWatch.

Windows:

%PROGRAMDATA%\Thinkbox\S3backedCache\Client\Logs\slave_controller.log

Linux:

/mnt/dtu/slave_controller.log

CloudWatch logs expire after 60 days by default. You can change the expiration by right-clicking on a running infrastructure and selecting ‘Log Retention Policy’.

../../_images/AWSPanelLogRetention.png