Deep learning A.I. can imitate the distortion effects of iconic guitar gods

Music making is increasingly digitized here in 2020, but some analog audio effects are still very difficult to reproduce digitally. One of those effects is the kind of screeching guitar distortion favored by rock gods everywhere. Until now, these effects, which are produced by guitar amplifiers, have been next to impossible to re-create convincingly in software.

That’s now changed thanks to the work of researchers in the department of signal processing and acoustics at Finland’s Aalto University. Using deep learning artificial intelligence (A.I.), they have created a neural network for guitar distortion modeling that, for the first time, can fool blind-test listeners into thinking it’s the genuine article. Think of it like a Turing Test, cranked all the way up to a Spınal Tap-style 11.

“It has been the general belief of audio researchers for decades that the accurate imitation of the distorted sound of tube guitar amplifiers is very challenging,” Professor Vesa Välimäki told Digital Trends. “One reason is that the distortion is related to dynamic nonlinear behavior, which is known to be hard to simulate even theoretically. Another reason may be that distorted guitar sounds are usually quite prominent in music, so it appears difficult to hide any problems there; all inaccuracies will be very noticeable.”

Researchers recorded the guitar effects in a special anechoic chamber. Photo: Mikko Raskinen

To train the neural network to recreate a variety of distortion effects, all that is needed is a few minutes of audio recorded from the target amplifier. The researchers used “clean” audio recorded from an electric guitar in an anechoic chamber, and then ran it through an amplifier. This provided both an input in the form of the unblemished guitar sound, and an output in the form of the corresponding “target” guitar amplifier output.
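As a rough illustration, that pairing step might look something like the sketch below. It assumes PyTorch and torchaudio, and the file names, mono mixdown, and segment length are all invented placeholders rather than details of the Aalto team's actual pipeline.

```python
# Hypothetical data-preparation sketch (not the Aalto pipeline): pair the
# "clean" anechoic guitar recording with the same performance played through
# the target amplifier, then cut both into aligned training segments.
import torchaudio

clean, sr = torchaudio.load("guitar_clean.wav")  # direct, unprocessed signal
target, _ = torchaudio.load("guitar_amp.wav")    # same take, through the amp
clean, target = clean.mean(0), target.mean(0)    # mix down to mono

segment_len = sr // 2  # half-second segments; an arbitrary illustrative choice
n = min(len(clean), len(target)) // segment_len * segment_len
clean_segs = clean[:n].view(-1, segment_len)     # network input
target_segs = target[:n].view(-1, segment_len)   # "target" amplifier output
```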

“Training is done by feeding the neural network a short segment of clean guitar audio, and comparing the network’s output to the ‘target’ amplifier output,” Alec Wright, a doctoral student focused on audio processing using deep learning, told Digital Trends. “This comparison is done in the ‘loss function,’ which is simply an equation that represents how far the neural network output is from the target output, or, how ‘wrong’ the neural network model’s prediction was. The key is a process called ‘gradient descent,’ where you calculate how to adjust the neural network’s parameters very slightly, so that the neural network’s prediction is slightly closer to the target amplifier’s output. This process is then repeated thousands of times — or sometimes much more — until the neural network’s output stops improving.”
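The sketch below turns that description into a minimal PyTorch training loop, continuing from the clean_segs and target_segs tensors in the earlier snippet. The tiny dilated-convolution model, the mean-squared-error loss, and the Adam optimizer are stand-in assumptions chosen for brevity; the paper's actual network and loss function differ.

```python
# Minimal training-loop sketch of the procedure Wright describes.
import torch
import torch.nn as nn

# Stand-in distortion model: a small stack of dilated 1-D convolutions.
# The real Aalto network differs; this only illustrates the training recipe.
class AmpModel(nn.Module):
    def __init__(self, channels=16, layers=6):
        super().__init__()
        blocks, in_ch = [], 1
        for i in range(layers):
            d = 2 ** i
            blocks += [nn.Conv1d(in_ch, channels, 3, padding=d, dilation=d),
                       nn.Tanh()]
            in_ch = channels
        blocks += [nn.Conv1d(channels, 1, 1)]
        self.net = nn.Sequential(*blocks)

    def forward(self, x):                  # x: (batch, 1, samples)
        return self.net(x)

model = AmpModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()                     # the "loss function": how wrong we are

for step in range(10_000):                 # "repeated thousands of times"
    i = torch.randint(len(clean_segs), (8,))   # random mini-batch of segments
    x = clean_segs[i].unsqueeze(1)             # clean guitar in
    y = target_segs[i].unsqueeze(1)            # amp output as the target
    loss = loss_fn(model(x), y)            # compare prediction to the target
    opt.zero_grad()
    loss.backward()                        # gradients of the loss w.r.t. weights
    opt.step()                             # gradient descent: nudge parameters
```

In practice, as Wright notes, the loop would simply run until the network's output stops improving on the target recordings.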

You can check out a demo of the A.I. in action at research.spa.aalto.fi/publications/papers/applsci-deep/. A paper describing the work was recently published in the journal Applied Sciences.
