
Databending from command line with sox

Incognitus

Lead Transcriber
Joined
May 30, 2021
Messages
341
Reaction score
630
Awards
10
Skull posted about databending with Audacity. I'd never heard of the term before, but as a long-time hacker, I'm definitely familiar with using applications for something other than their intended purpose.

I honestly don't like the Audacity approach. You have to manually select the part of the file to change, and it's very easy to overwrite the header and break the image. So, as usual, I wanted to find a command-line way of doing the same thing: faster, scriptable, and so on.

Enter sox. SoX (Sound eXchange) is a command-line audio toolkit for Linux, Mac, and Windows, and it does everything Audacity does and more.

For Linux, it's easy to install with apt-get install sox or yum install sox (depending on the distro). For Mac (I use a MacBook Pro regularly), it's easiest to install through Homebrew (brew install sox). I've never used the Windows version (that would sort of defeat the purpose for me personally), but it's downloadable from SourceForge.

I'm starting with this image. It's a 3D fractal I created in Mandelbulb3d quite a while ago. I named it Buddha, but as far as I can remember I haven't posted it anywhere yet. I converted it to a BMP (the same as for databending in Audacity).


I'm doing this in Terminal on a Mac, but it would be the same on Linux, and the flags and options should be identical.

I ran it through some audio filters with sox, using the output of each command as the input to the next.

sox -t ul -c 1 -r 48k buddah.bmp -t ul buddah2.bmp trim 0 100s : flanger
sox -t ul -c 1 -r 48k buddah2.bmp -t ul buddah3.bmp trim 0 100s : phaser 0.3 0.9 1 0.5 0.2 -t
sox -t ul -c 1 -r 48k buddah3.bmp -t ul buddah-glitch.bmp trim 0 100s : sinc 250-35
The -t ul sets the file type to raw u-law, -c 1 sets the channel count to 1 (mono), and -r 48k sets the sample rate to 48 kHz. Everything after the second BMP filename is the chain of audio effects.
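If you want to script a batch of these, the invocations are easy to assemble programmatically. Here's a small Python sketch (the helper name and defaults are mine, not part of sox) that builds the same argument list the commands above use and hands it to subprocess:

```python
import subprocess

def sox_glitch_cmd(src, dst, effect, header_samples=100,
                   filetype="ul", channels=1, rate="48k"):
    """Build a sox command that treats an image as raw u-law audio,
    copies the first `header_samples` samples through untouched,
    then applies `effect` (a list of effect arguments) to the rest."""
    return (["sox", "-t", filetype, "-c", str(channels), "-r", rate, src,
             "-t", filetype, dst,
             "trim", "0", f"{header_samples}s", ":"] + list(effect))

cmd = sox_glitch_cmd("buddah.bmp", "buddah2.bmp", ["flanger"])
# To actually run it (requires sox on your PATH):
# subprocess.run(cmd, check=True)
```

From there, chaining the three steps from the post is just three calls with the previous output as the next input.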

The "trim 0 100s" is part of the magic. The ":" separates effect chains, so the first chain just copies the first 100 samples — enough to cover the BMP header — through untouched, and the effect after the ":" only processes the rest of the file. It works perfectly every time. Flanger, phaser, and sinc are the actual filters.
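The header-preserving idea can be mimicked in plain Python (purely illustrative — sox does real DSP, this just demonstrates why the first 100 bytes matter): copy the header region verbatim, then mangle everything after it. For reference, a standard BMP file header plus DIB header is 54 bytes, so 100 samples (100 bytes in 8-bit u-law mono) comfortably covers it.

```python
def glitch_bytes(data: bytes, header_len: int = 100) -> bytes:
    """Leave the first `header_len` bytes (the BMP header region)
    untouched and apply a toy 'effect' -- here byte inversion --
    to everything after it."""
    header, body = data[:header_len], data[header_len:]
    return header + bytes(255 - b for b in body)

# Hypothetical usage on an image file:
# with open("buddah.bmp", "rb") as f:
#     glitched = glitch_bytes(f.read())
# with open("buddah-glitch.bmp", "wb") as f:
#     f.write(glitched)
```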

From the sox documentation...

Flanger

The flanger effect is like the chorus effect, but the delay varies between 0 ms and a maximum of 5 ms. It sounds like wind blowing at changing speeds.

The flanger effect is widely used in funk and soul music, where the guitar sound varies frequently, a bit slower or faster.
Phaser

The phaser effect is like the flanger effect, but it uses a reverb instead of an echo and does phase shifting.
Sinc is a sinc-kernel filter that can act as a low-pass, high-pass, band-pass, or — with the higher edge listed first, as in the 250-35 above — band-reject filter.

There's also an echo filter, reverse, and more, but I didn't use any of those on this image. This is the result of the three commands above.


Pretty happy with the results.
 

Incognitus

Lead Transcriber
Joined
May 30, 2021
Messages
341
Reaction score
630
Awards
10
For a bonus, I did an animated GIF.

d9tMn1u.gif
 

Incognitus

Lead Transcriber
Joined
May 30, 2021
Messages
341
Reaction score
630
Awards
10
Just realized I've been linking these images all wrong. Here's the before and after images from the first post.

Before:

Ygm12LS.jpg


After:

tPZG0cT.jpg


Note the bind rune in the lower left corner that I mentioned in another thread.
 

Incognitus

Lead Transcriber
Joined
May 30, 2021
Messages
341
Reaction score
630
Awards
10
Another one. This is another Mandelbulb3d fractal, and is one of the dozens I've done that have included the seal of King Paimon. That fractal was run through a custom Tensorflow style transfer script I wrote (AI based edge detection mostly). Then it was glitched with sox, using the flanger filter.

A2YL8SN.jpg
 

Jarhyn

Acolyte
Joined
Jan 27, 2022
Messages
289
Reaction score
259
Awards
3
Another one. This is another Mandelbulb3d fractal, and is one of the dozens I've done that have included the seal of King Paimon. That fractal was run through a custom Tensorflow style transfer script I wrote (AI based edge detection mostly). Then it was glitched with sox, using the flanger filter.

A2YL8SN.jpg
So, I know that it isn't the intent, but... how fast can these fractals be generated, can the parameters be put into a field, and how minutely — how "nearly the same image" — can you vary the output through controlled manipulation of the parameters?

Could I for instance take an image, "fractalize" it, alter a parameter a little bit, and get something that is nearly the same image with slightly different angles?

I'm just thinking of the use of this to generate Perlin noise for instantiating initial connection-weight graphs in neural stacks, to allow them to evolve in a somewhat more linear fashion than what is normally seen with "high entropy" approaches where the whole network gets respun.
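A minimal sketch of that idea (all names and parameters here are mine, and this is value noise with smooth interpolation rather than true gradient Perlin noise): generate a coarse random lattice, interpolate it smoothly up to full resolution, and use the result as an initial weight grid.

```python
import random

def smooth_noise_grid(rows, cols, lattice=4, seed=0):
    """Value noise: random values on a coarse lattice, smoothly
    interpolated up to a rows x cols grid of weights in [-1, 1]."""
    rng = random.Random(seed)
    lat = [[rng.uniform(-1, 1) for _ in range(lattice + 1)]
           for _ in range(lattice + 1)]

    def fade(t):  # smoothstep easing between lattice points
        return t * t * (3 - 2 * t)

    grid = []
    for i in range(rows):
        y = i / max(rows - 1, 1) * lattice
        y0 = min(int(y), lattice - 1)
        ty = fade(y - y0)
        row = []
        for j in range(cols):
            x = j / max(cols - 1, 1) * lattice
            x0 = min(int(x), lattice - 1)
            tx = fade(x - x0)
            # bilinear blend of the four surrounding lattice values
            top = lat[y0][x0] * (1 - tx) + lat[y0][x0 + 1] * tx
            bot = lat[y0 + 1][x0] * (1 - tx) + lat[y0 + 1][x0 + 1] * tx
            row.append(top * (1 - ty) + bot * ty)
        grid.append(row)
    return grid

weights = smooth_noise_grid(8, 8, lattice=2, seed=42)
```

Because neighboring weights are correlated, a small change to the seed or lattice perturbs the whole field gradually — the "nearly the same image" behavior asked about above.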
 

Incognitus

Lead Transcriber
Joined
May 30, 2021
Messages
341
Reaction score
630
Awards
10
The fractal itself took several hours, and just about every parameter is modifiable. Mandelbulb3d is a free Windows 3D fractal app. I do more in JWildfire and Mandelbulber (despite the similar name, a much different piece of software), but for all of them the number of options is nearly overwhelming. All of the apps can create animations, and in the case of Mandelbulber you can even animate fractal creation set to music. Really cool stuff. Like most fractal artists I originally started with Apophysis, but I moved on from that software a long time ago.

Could I for instance take an image, "fractalize" it, alter a parameter a little bit, and get something that is nearly the same image with slightly different angles?
In this case, the fractal is an intentionally created image, rather than an image or something that was fractalized after creation, but you can change just about everything while creating the fractal, including setting the angle of the camera or fractal itself to make it render at different angles. JWildfire is really good at this and actually has function graphs for almost all settings that you can tie to animations.

The second part was a TensorFlow style-transfer script I wrote, based on a tutorial. In this case, I used another fractal of mine as the "style", and it's what created all the curlicues and swirls in the image. That can be modified in a lot of different ways, including choosing only specific layers from the neural model, modifying weights, etc. I'll admit I don't understand all of it yet; I find it super complicated, but it's a lot of fun to mess with and I'm slowly picking it up.

It's interesting you mention generating noise. While learning TensorFlow and how to do things like DeepDream-type images with custom-trained models, some of my tests started with an image of random monochrome noise, which I then ran through an already-trained model that I had fine-tuned against thousands of sigil images. This was when I was using Caffe for the neural network, and I never finished what I was working on because I switched to TensorFlow instead, which is much easier to use. I started with this, but with a 1024x768 version that I can't seem to find.

UT9uQMk.jpg
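Generating that kind of monochrome-noise starting image is straightforward. Here's a stdlib-only sketch (not the author's actual input pipeline) that writes uniform random grey values to a plain-text PGM, which most image tools and ML image loaders can open:

```python
import random

def noise_pgm(path, width=64, height=64, seed=0):
    """Write a plain-text (P2) PGM filled with uniform random
    greyscale values -- a monochrome-noise starting image."""
    rng = random.Random(seed)
    rows = [" ".join(str(rng.randint(0, 255)) for _ in range(width))
            for _ in range(height)]
    with open(path, "w") as f:
        f.write(f"P2\n{width} {height}\n255\n" + "\n".join(rows) + "\n")

noise_pgm("noise.pgm", 64, 64, seed=1)
```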


After running it through Caffe, this was the result (specifically, against a pre-trained image model fine-tuned with copies of the sigil for Valefor).

TteYiSJ.jpg


Many of the lines and such are directly due to the fine tuning on sigil images.
 

SkullTraill

Glorious Light of Knowledge and Power
Staff member
Custodian
Librarian
Joined
Apr 12, 2021
Messages
1,928
Reaction score
16,044
Awards
19
Very nice! Mind you, Audacity was just part of my introductory phase to glitch art. It was useful in the sense that I could explore what kinds of possibilities lay within programmatic and generative manipulation of images using software. I never moved to command-line stuff because, frankly, I just started coding my own filters and manipulations in Python, Processing, Max, etc.

But thanks for sharing about sox! I can definitely see how it would be way more useful/convenient than Audacity for applying audio filters to images!
 