I bumped several of the dependencies for one of my projects. It uses Svelte and Webpack, and makes use of svelte-loader to facilitate importing .svelte files.
I upgraded Svelte from v3.57.0 to v4.2.0, and bumped svelte-loader, svelte-preprocess, prettier-plugin-svelte, and many other libraries to their latest versions as well.
After the upgrade, my Webpack dev server started up, but the page failed to load, with many errors like these displayed in the console:
I’ve been using a tool called radeontop for years to monitor the performance and utilization of my AMD GPUs on Linux. It’s a TUI-based application that renders the value of different performance counters as bar charts:
For the most part, it does a good job and it provides a concise overview of GPU utilization.
However, it seems that radeontop is no longer actively developed/updated.
It’s received only ~7 commits in the past ~3 years, and although it still mostly works even with the latest GPUs like the 7900 XTX, it’s not under active development.
I’m picking back up the work that I started last year building 3D scenes and sketches with Three.JS.
At that time, it was just after AI image generators like DALL-E and Stable Diffusion were really taking off. I had success running Stable Diffusion locally and using it to generate textures for terrain, buildings, and other environments in the 3D worlds I was building.
I was using Stable Diffusion v1 back then.
I recently finished a big blog post about growing sparse computational graphs with RNNs.
An important part of that work involved creating a custom RNN architecture to facilitate the growth of extremely sparse networks. To help explain that custom RNN architecture in the blog post, I created some visualizations that looked like this:
These images are SVGs, so they scale infinitely without getting pixelated or blurry. I looked into a few different options for generating these, including tools like draw.
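To illustrate why SVG works so well for diagrams like these, here's a minimal sketch of generating one programmatically. The layout (a single row of units drawn as circles) is a hypothetical simplification for illustration, not the actual code used to produce the post's figures:

```python
# Hypothetical sketch: build a row of network units as an SVG string.
# Because SVG is vector-based, the result scales to any size without
# pixelation -- the property that motivated using SVGs for the figures.

def render_layer_svg(n_units: int, spacing: int = 40, radius: int = 12) -> str:
    width = spacing * (n_units + 1)
    parts = [
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="80">'
    ]
    for i in range(n_units):
        cx = spacing * (i + 1)
        # One circle per unit, centered vertically
        parts.append(
            f'<circle cx="{cx}" cy="40" r="{radius}" fill="none" stroke="black"/>'
        )
    parts.append("</svg>")
    return "\n".join(parts)
```

The output string can be written straight to a `.svg` file or embedded inline in HTML.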
UPDATE 2023-08-15:
Some engineers at Google reached out to me via e-mail after I submitted some feedback about this issue on the GCP console and linked to this blog post.
After a few back and forth messages, they were able to diagnose the problem and put out a mitigation that completely fixed it for us!
The issue seems to have stemmed from cursors tracking the position in the streaming buffer getting out of sync between the BI engine and base BigQuery.
I recently upgraded to a 7900 XTX GPU. Besides being great for gaming, I wanted to try it out for some machine learning.
It’s well known that NVIDIA is the clear leader in AI hardware currently. Most ML frameworks support NVIDIA via CUDA as their primary (or only) option for acceleration; OpenCL support lags behind in both coverage and performance.
That being said, the 7900 XTX is a very powerful card.
We ran into this error at my job at Osmos. We upload files to GCS using their JSON-based REST API. Everything was working just fine until we tried uploading a large-ish file of ~2.5GB.
We upload to this API route: https://storage.googleapis.com/upload/storage/v1/b/bucket-name/o?name=file_name.csv&uploadType=media
The Problem

When we tried to upload the data, we got an HTML page as a response with a 400 error code and this unhelpful error message:

400. That's an error.
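For reference, here's a sketch of the kind of single-request ("media") upload we were making against that route. The bucket name, object name, and token are placeholders; the function only builds the request rather than sending it:

```python
import urllib.request

# Placeholders -- substitute real values before sending.
BUCKET = "bucket-name"
OBJECT_NAME = "file_name.csv"
UPLOAD_URL = (
    "https://storage.googleapis.com/upload/storage/v1/"
    f"b/{BUCKET}/o?name={OBJECT_NAME}&uploadType=media"
)

def build_upload_request(data: bytes, token: str) -> urllib.request.Request:
    # Builds (but does not send) the POST. Sending a ~2.5GB body through
    # this single-request route is what produced the 400 response for us.
    return urllib.request.Request(
        UPLOAD_URL,
        data=data,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "text/csv",
        },
    )
```

Small uploads through this route worked fine; only the large file triggered the error.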
The Problem

I recently updated the packages on my Debian Linux install with sudo apt upgrade.
After that, I rebooted and tried to launch League of Legends through Lutris as I have hundreds of times. The client failed to launch with this error printed in the logs:
DRI driver not from this Mesa build ('23.1.0-devel' vs '23.1.2-1')

The Cause

I recently installed AMD ROCm using amdgpu-install so I could do some machine learning with my AMD 7900 XTX GPU.
I’ve been experimenting with OpenCL via pyopencl recently. They provide a nice interface for enumerating available devices and getting information about them, and then using them to run OpenCL code:
>>> import pyopencl as cl
>>> platform = cl.get_platforms()[0]
>>> platform.get_devices()
[<pyopencl.Device 'gfx1100' on 'AMD Accelerated Parallel Processing' at 0x56353125bd70>,
 <pyopencl.Device 'gfx1036' on 'AMD Accelerated Parallel Processing' at 0x5635312ec670>]
>>>

There are two devices for me because I have one discrete 7900 XTX GPU as well as an integrated GPU on my 7950X CPU.
I recently upgraded to a 7900 XTX GPU. The upgrade itself went quite smoothly from both a hardware and software perspective. Games worked great out of the box with no driver or other configuration needed - as plug and play as it could possibly get.
However, I wanted to try out some machine learning on it. I’d been using TensorFlow.JS to train models using my GPU all in the browser, but that approach is limited compared to what’s possible when running it natively.