User uploads file to Kinopio
scroll to the bottom to see the implemented workflow, using imgproxy
Kinopio stores original file
Original File
Video
If file type
Image
Convert to MP4
Converted and compressed file
Other (PDF, ZIP etc.)
Audio
Convert to WebP or JPEG
we only need to do img conversions in the case that the file is too large (by some threshold, like file size > 300 KB); see the sketch after the diagram
Convert to MP3
do nothing
or if the file is a HEIC/TIFF/WAV or some other format that can’t be viewed in browsers
store processed file
urlFile
is there a reason to not always just use WebP for images?
urlOriginalFile
URL to use in the card name sent back to the client (compressed version, or original if no compression needed)
S3
uploads original file to S3
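A rough sketch of the conversion rules in the diagram above (TypeScript). The 300 KB threshold, format lists, and field names are just the assumptions from these notes, mirroring urlFile / urlOriginalFile:

```ts
// Sketch only: decides which conversion (if any) the pipeline should apply.
// Threshold and format lists are assumptions taken from the notes above.
const IMAGE_SIZE_THRESHOLD = 300 * 1024 // bytes
const NON_BROWSER_IMAGE_FORMATS = ['heic', 'tiff']

type Conversion = 'mp4' | 'webp' | 'mp3' | null

const conversionFor = (fileType: string, extension: string, sizeBytes: number): Conversion => {
  if (fileType === 'video') return 'mp4'
  if (fileType === 'audio') return extension === 'wav' ? 'mp3' : null // e.g. WAV can't be played in all browsers
  if (fileType === 'image') {
    const tooLarge = sizeBytes > IMAGE_SIZE_THRESHOLD
    const notViewable = NON_BROWSER_IMAGE_FORMATS.includes(extension)
    return tooLarge || notViewable ? 'webp' : null
  }
  return null // other (PDF, ZIP etc.): do nothing
}

// urlFile is the compressed version (or the original if no conversion was needed),
// urlOriginalFile always points at the untouched upload on S3
interface ProcessedUpload {
  urlFile: string
  urlOriginalFile: string
}
```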
Queueing via BullMQ, which is backed by Redis
uploads file
Client
queue processing of original file URL
API Server
update client with new URL
Redis
Worker
for converting audio and video
notify server of success
for converting images
Pulls the original file from S3 and creates optimized versions as described above in ‘Media Pipeline’
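A minimal sketch of that queueing step, assuming BullMQ’s Queue/Worker API with Redis as the backing store; the queue name, job payload, and notify step are placeholders:

```ts
import { Queue, Worker, type Job } from 'bullmq'

const connection = { host: 'localhost', port: 6379 } // Redis

// API server side: queue processing of the original file URL after each upload
const mediaQueue = new Queue('media-processing', { connection })

export const enqueueMediaJob = async (originalFileUrl: string, cardId: string) => {
  await mediaQueue.add('optimize', { originalFileUrl, cardId })
}

// Worker side: pulls the original file from S3, creates optimized versions
// as described above in 'Media Pipeline', then notifies the server of success
const worker = new Worker(
  'media-processing',
  async (job: Job) => {
    const { originalFileUrl, cardId } = job.data
    // 1. download originalFileUrl from S3
    // 2. convert/compress per file type (MP4 / WebP / MP3)
    // 3. upload the processed file back to S3
    // 4. notify the API server so it can update the client with the new URL
  },
  { connection }
)
```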
after each user upload,
+ Don’t have to worry about lib issues with Sharp etc.
imgproxy will process and upload the file to S3, then send the processed URL to kinopio-server
- if the server ever gets hosted on CDNs, more work/expense to configure imgproxy to also be accessible via a CDN
+ may handle huge parallel loads of resizing tasks more efficiently
+ secured against ppl using images as an attack vector
- only for images, although I think this is the 99% case so vids and audio are prob much less of a big deal. Although there are some niches like musicians where auto-processing WAV to MP3 is a hard requirement
- downside is potentially increased maintenance/debugging/deployment burden
kinopio-server updates the card record and websockets the small URL to the space
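A hypothetical sketch of that update step, assuming an Express-style route; the route path and the db/broadcast helpers are placeholders, not kinopio-server’s real internals:

```ts
import express from 'express'

// Placeholder stubs; not kinopio-server's actual modules
const db = {
  updateCard: async (cardId: string, fields: { urlFile: string }) => {
    /* persist the new field values on the card record */
  }
}
const broadcastToSpace = (spaceId: string, message: object) => {
  /* send the message over the space's websocket connections */
}

const app = express()
app.use(express.json())

// Called once the processed file has been uploaded to S3
app.post('/media/processed', async (req, res) => {
  const { cardId, spaceId, processedUrl } = req.body
  await db.updateCard(cardId, { urlFile: processedUrl }) // update the card record
  broadcastToSpace(spaceId, { name: 'updateCard', cardId, urlFile: processedUrl }) // websocket the small URL to the space
  res.status(200).end()
})
```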
how does imgproxy handle transparent gifs? and animated gifs?
after each user upload
Just tested it. For animated gifs it creates a smaller WebP version: 1.3 MB -> 126 KB
add url to db queue
+ easier to test, debug

kinopio-server processes the file and uploads it to S3 (see the Sharp sketch below)
+ DRY: we already have code for uploading to S3
+ saves an HTTP roundtrip
- might end up being a lot of work because of lib issues
Transparent

+ the same pipeline can be reused for other types of files
- might be slower because it can only process one image at a time (but the client is also not dependent on fast img processing; it can take as long as it takes)
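To make the Sharp tradeoff concrete, here is a minimal sketch of how the server-side option could convert images; the threshold, quality, and format list are assumptions from the notes above:

```ts
import sharp from 'sharp'

const IMAGE_SIZE_THRESHOLD = 300 * 1024 // bytes; assumption from the notes above
const NON_BROWSER_FORMATS = ['heif', 'tiff'] // sharp reports HEIC files as 'heif'

// Returns a compressed WebP buffer, or the original buffer if no conversion is needed
export const maybeCompressImage = async (original: Buffer): Promise<Buffer> => {
  const { format } = await sharp(original).metadata()
  const needsConversion =
    original.length > IMAGE_SIZE_THRESHOLD ||
    NON_BROWSER_FORMATS.includes(format ?? '')
  if (!needsConversion) return original
  // animated: true preserves animated GIF frames when converting to WebP
  return sharp(original, { animated: true }).webp({ quality: 80 }).toBuffer()
}
```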

before
after
Oh wow, didn’t know WebP could animate!
before

uploads image
Kinopio Client
S3
after
+ separates the problem of optimization/compression/conversion of images from the rest of the app
requests image
get original image
Kinopio Client
- downside is that we have to host one more service (imgproxy)
+ does not introduce breaking changes if the solution does not work out
Image Proxy
responds with processed image
converts/compresses image and caches it for later
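For reference, a sketch of how one of those proxy requests could be built, assuming imgproxy’s standard signed-URL scheme with hex-encoded key/salt; the host, size, and processing options are placeholders:

```ts
import { createHmac } from 'node:crypto'

const IMGPROXY_HOST = 'https://imgproxy.example.com' // placeholder host
const KEY = process.env.IMGPROXY_KEY ?? ''   // hex-encoded, shared with the imgproxy service
const SALT = process.env.IMGPROXY_SALT ?? '' // hex-encoded

const urlSafeBase64 = (input: Buffer) =>
  input.toString('base64').replace(/=+$/, '').replace(/\+/g, '-').replace(/\//g, '_')

// Builds a signed imgproxy URL that resizes the original S3 image and converts it to WebP
export const proxiedImageUrl = (originalUrl: string, width = 600): string => {
  const source = urlSafeBase64(Buffer.from(originalUrl))
  const path = `/resize:fit:${width}/format:webp/${source}`
  const signature = createHmac('sha256', Buffer.from(KEY, 'hex'))
    .update(Buffer.from(SALT, 'hex'))
    .update(path)
    .digest()
  return `${IMGPROXY_HOST}/${urlSafeBase64(signature)}${path}`
}
```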
- additional railway cost through hosting one more service
+ does not require server-side changes
I should’ve mentioned this earlier but a requirement is that the user can still fetch the original image. Is this possible with the proxy?
+ Easier to maintain
- Should configure a CDN to cache images and make it more performant
+ Can quickly try different compression & size settings because we request images at runtime
Coolness
Additional Railway services are charged based on use, so it prob won’t cost much
- only for images, although I think this is the 99% case so vids and audio are prob much less of a big deal. Although there are some niches like musicians where auto-processing WAV to MP3 is a hard requirement
I originally envisioned the system so that the card displays a small-filesize image, but if you click into the card and click the image preview in there, it’ll send you to the URL for the original upload
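As a sketch of that client-side behaviour (the field names follow urlFile / urlOriginalFile from the pipeline notes above; hypothetical, not Kinopio’s actual card schema):

```ts
// The card itself renders the small processed file; the image preview inside
// the card details links out to the untouched upload so it can always be fetched.
interface CardMedia {
  urlFile: string         // compressed/processed version
  urlOriginalFile: string // original upload on S3
}

const cardThumbnailSrc = (media: CardMedia): string =>
  media.urlFile || media.urlOriginalFile // fall back if nothing was processed

const previewLinkHref = (media: CardMedia): string =>
  media.urlOriginalFile
```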
+ Only requires small client-side changes
+ does not require changing how images are uploaded and stored on S3
From my understanding, would it always run, so it would have the maximum cost, just like the API service?
+ Don’t have to worry about lib issues with Sharp etc.
+ may handle huge parallel loads of resizing tasks more efficiently
I think they compute demand based on CPU and memory use dynamically. So I’m not sure yet if the API will use it all up
You also have the option of spinning the service up and down based on when it gets pinged, if you want to use it like a ‘serverless’ service (in the service project settings)
+ secured against ppl using images as an attack vector

We can try that. I am just afraid it’s going to be too slow
imgproxy seems to have a pretty small memory and CPU footprint
Ya, it sounds like it’s not needed
Promising!