We ran into this error at my job at Osmos. We upload files to GCS using their JSON-based REST API. Everything was working just fine until we tried uploading a large-ish file of ~2.5GB.
We upload to this API route: https://storage.googleapis.com/upload/storage/v1/b/bucket-name/o?name=file_name.csv&uploadType=media
The Problem

When we tried to upload the data, we got an HTML page as a response with a 400 error code and this unhelpful error message:
400. That's an error.
The Problem

I recently updated the packages on my Debian Linux install with sudo apt upgrade.
After that, I rebooted and tried to launch League of Legends through Lutris as I have hundreds of times. The client failed to launch with this error printed in the logs:
DRI driver not from this Mesa build ('23.1.0-devel' vs '23.1.2-1')

The Cause

I recently installed AMD ROCm using amdgpu-install so I could do some machine learning with my AMD 7900 XTX GPU.
I’ve been experimenting with OpenCL via pyopencl recently. It provides a nice interface for enumerating the available devices, querying information about them, and then using them to run OpenCL code:
>>> import pyopencl as cl
>>> platform = cl.get_platforms()[0]
>>> platform.get_devices()
[<pyopencl.Device 'gfx1100' on 'AMD Accelerated Parallel Processing' at 0x56353125bd70>,
 <pyopencl.Device 'gfx1036' on 'AMD Accelerated Parallel Processing' at 0x5635312ec670>]

There are two devices for me because I have one discrete 7900 XTX GPU as well as an integrated GPU on my 7950X CPU.
I recently upgraded to a 7900 XTX GPU. The upgrade itself went quite smoothly from both a hardware and software perspective. Games worked great out of the box with no driver or other configuration needed - as plug and play as it could possibly get.
However, I wanted to try out some machine learning on it. I’d been using TensorFlow.js to train models using my GPU all in the browser, but that approach is limited compared to what’s possible when running it natively.
I recently upgraded to the 7900 XTX GPU which was a totally issue-free experience. Then today, I tried to install AMD ROCm so I could try out AMD’s TensorFlow fork that works with AMD GPUs.
I ran into a lot of issues with this that resulted in my computer not being able to boot for a while. I eventually figured it out, but it was quite a struggle.
It started after I downloaded and ran amdgpu-install - AMD’s tool for installing drivers and other software for use with their hardware.
Just today, I switched to the 7900 XTX GPU. I mostly just wanted an upgrade, but I also secretly hoped it would fix a lot of the weird GPU-related issues I’ve had over the past years.
The 5700 XT is a rather buggy GPU as far as I can tell - especially on Linux which is my only OS on my desktop. I’ve run into multiple bugs with drivers and other mysterious green-screen crashes:
TL;DR:
uPlot is a Spartan charting library that focuses intensely on minimalism and performance. It feels very much like a tool made for hackers, and it lacks many of the features and embellishments of fully-featured charting libraries.
The main downside is that its docs are quite terrible and some of its APIs are confusing.
I’m personally a big fan of its aesthetic and design goals, and I will probably be sticking with it as my primary charting library for the web for the foreseeable future.
The main Rust workspace for my job at Osmos is very large. It has several thousand dependencies, does copious compile time codegen from gRPC protobuf definitions, and makes extensive use of macros from crates like serde, async-stream, and many others.
While it’s really convenient having all of our code in one place, this results in a lot of work being done by the Rust compiler as well as rust-analyzer during normal development.
I recently encountered a bug in one of our services at my job at Osmos. Our service is written in Rust and connects to GCP PubSub via its gRPC interface.
We were running into errors in our logs like this:
Error, message length too large: found 5360866 bytes, the limit is: 4194304 bytes

The service in question had been running for over two years without hitting this issue, and the message size limit shown is smaller than the PubSub message size cap of 10MB.
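For context, that 4194304-byte figure is gRPC's default maximum receive message size of 4 MiB (4 * 1024 * 1024), which clients must raise explicitly. Our service is Rust, but the knob is easiest to show with Python's grpc library, where it's set via channel options (endpoint and limit here are illustrative):

```python
import grpc

# 4 * 1024 * 1024 = 4194304 is the default gRPC receive limit that
# produced the error above; raise it toward PubSub's 10MB message cap.
MAX_MSG_BYTES = 10 * 1024 * 1024

channel = grpc.secure_channel(
    "pubsub.googleapis.com:443",
    grpc.ssl_channel_credentials(),
    options=[
        ("grpc.max_receive_message_length", MAX_MSG_BYTES),
        ("grpc.max_send_message_length", MAX_MSG_BYTES),
    ],
)
```

The failing message in our logs (5360866 bytes) sits squarely between the 4 MiB default and the 10MB cap, which is exactly the window where a client with default limits starts rejecting messages the server is happy to deliver.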
I often need to expose some service running locally on my computer to the public internet for some reason or another. Demoing a website, exposing an API, giving someone access to download some local files, stuff like that.
The popular solution for this is tools like ngrok. You download their CLI application to your computer, specify a port, and your local service is available at a URL like https://a78f837.ngrok.io/.
The downside of this is that the free tiers of these services come with limits.