The discriminator I used is similar to the ones described in https://arxiv.org/abs/2012.07267 and https://arxiv.org/abs/1910.06711. Its input is the f0, amplitudes, harmonic distribution, and noise magnitudes, conditioned on the conditioning signals (note expression controls). It runs three identical discriminator networks on three scales of the data: the original, average-pooled 2x, and average-pooled 4x. Each discriminator network has 4 blocks, and each block consists of two 1x3 conv layers with a residual connection and LeakyReLU (similar to the blocks used in https://arxiv.org/abs/2012.07267). The adversarial loss only applies to the outputs other than f0; f0 is still learned and generated by an autoregressive RNN. The discriminator's training objective is the least-squares GAN loss, and the generator's training objective is the least-squares GAN loss plus a reconstruction loss (spectral loss), an f0 cross-entropy loss, and a feature-matching loss on the discriminator feature maps. A sketch of this multi-scale setup follows below.
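
For concreteness, here is a minimal TensorFlow sketch of the multi-scale discriminator. The class names, channel count, and the exact way the conditioning is injected are my assumptions for illustration, not the actual implementation:

In [ ]:
import tensorflow as tf

class DiscriminatorBlock(tf.keras.layers.Layer):
    """Two 1x3 convs with a residual connection and LeakyReLU (assumed shapes)."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = tf.keras.layers.Conv1D(channels, 3, padding='same')
        self.conv2 = tf.keras.layers.Conv1D(channels, 3, padding='same')
        self.skip = tf.keras.layers.Conv1D(channels, 1)  # match channels for the residual

    def call(self, x):
        y = tf.nn.leaky_relu(self.conv1(x))
        y = tf.nn.leaky_relu(self.conv2(y))
        return y + self.skip(x)

def make_discriminator(channels=128, n_blocks=4):
    # One per-scale discriminator: 4 residual blocks, then a scalar score per frame.
    return tf.keras.Sequential(
        [DiscriminatorBlock(channels) for _ in range(n_blocks)]
        + [tf.keras.layers.Conv1D(1, 1)])

class MultiScaleDiscriminator(tf.keras.Model):
    """Identical discriminators on the original, 2x, and 4x average-pooled data."""
    def __init__(self):
        super().__init__()
        self.discriminators = [make_discriminator() for _ in range(3)]
        self.pool = tf.keras.layers.AveragePooling1D(pool_size=2)

    def call(self, features, conditioning):
        # features: [batch, time, channels] -- amplitudes, harmonic distribution,
        # and noise magnitudes stacked along the channel axis.
        x = tf.concat([features, conditioning], axis=-1)
        scores = []
        for disc in self.discriminators:
            scores.append(disc(x))
            x = self.pool(x)  # halve the time resolution for the next scale
        return scores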

The best model I have tuned so far uses a smaller learning rate for the discriminator than for the generator (1e-4 vs. 3e-4) and blocks the gradient from the discriminator to the f0 autoregressive RNN. I also tried adding noise to the generator by feeding noise as input to the dilated conv stack, with the conditioning vector as conditioning. It gives similar results; I can't really tell the difference.
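
A sketch of how the training objectives could be wired up, again with assumed names rather than the actual code; the f0 gradient blocking amounts to a tf.stop_gradient before the discriminator input:

In [ ]:
import tensorflow as tf

def lsgan_losses(real_scores, fake_scores):
    # Least-squares GAN: the discriminator pushes real scores to 1 and fake
    # scores to 0; the generator pushes its fake scores to 1. Summed over scales.
    d_loss = sum(tf.reduce_mean((r - 1.0) ** 2) + tf.reduce_mean(f ** 2)
                 for r, f in zip(real_scores, fake_scores))
    g_loss = sum(tf.reduce_mean((f - 1.0) ** 2) for f in fake_scores)
    return d_loss, g_loss

def feature_matching_loss(real_feats, fake_feats):
    # L1 distance between discriminator feature maps of real and generated data.
    return sum(tf.reduce_mean(tf.abs(r - f))
               for r, f in zip(real_feats, fake_feats))

# Separate optimizers; the discriminator learning rate is smaller (1e-4 vs. 3e-4).
g_optimizer = tf.keras.optimizers.Adam(3e-4)
d_optimizer = tf.keras.optimizers.Adam(1e-4)

# Detach f0 before it enters the discriminator so the adversarial gradient
# never reaches the autoregressive f0 RNN, e.g.:
# f0_for_disc = tf.stop_gradient(f0_pred)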

Here are the results. The samples sound noticeably closer to the timbre of the original recording, and the harmonic distribution and noise magnitude plots no longer show over-smoothing.

Original recording:

In [2]:
import utils.audio_io
In [3]:
wav = utils.audio_io.load_audio(r'/data/ddsp-experiment/logs/logs/ref.wav', 16000)
plot_spec(wav, hp.sample_rate, title='')
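
Note: plot_spec and hp come from this project's own utilities. A minimal sketch of what plot_spec roughly does (an assumption, not the actual implementation):

In [ ]:
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

def plot_spec(wav, sample_rate, title=''):
    # Log-magnitude STFT spectrogram of a mono waveform.
    spec_db = librosa.amplitude_to_db(np.abs(librosa.stft(wav)), ref=np.max)
    plt.figure(figsize=(20, 4))
    librosa.display.specshow(spec_db, sr=sample_rate, x_axis='time', y_axis='hz')
    plt.title(title)
    plt.colorbar()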

Audio sample of the old model that has the over-smoothing problem:

In [4]:
wav = utils.audio_io.load_audio(r'/data/ddsp-experiment/logs/logs/pred_cnn.wav', 16000)
plot_spec(wav, hp.sample_rate, title='')

Audio sample of GAN model:

In [13]:
plot_spec(outputs['midi_audio'][0].numpy(), sample_rate)

Audio sample of GAN model with noise as input:

In [3]:
wav = utils.audio_io.load_audio(r'/data/ddsp-experiment/logs/logs/pred_noise_gan.wav', 16000)
plot_spec(wav, hp.sample_rate, title='')

DDSP Inference sample:

In [14]:
plot_spec(outputs['synth_audio'][0].numpy(), sample_rate)
In [15]:
params_pred = outputs['params_pred']
# Controls predicted by the synthesizer parameters generator.
midi_synth_params = {
    'amplitudes': params_pred['amplitudes'],
    'harmonic_distribution': params_pred['harmonic_distribution'],
    'noise_magnitudes': params_pred['noise_magnitudes'],
    'f0_hz': params_pred['f0_hz'],
}
# Controls inferred by the DDSP autoencoder, for comparison.
synth_params = outputs['synth_params']
In [16]:
import librosa
import librosa.display

Harmonic distribution and noise magnitude plots (GAN model without noise input)

The harmonic distribution of the DDSP inference (autoencoder):

In [17]:
plt.figure(figsize=(20,4))
librosa.display.specshow(synth_params['harmonic_distribution'].numpy()[0].T)
plt.colorbar()

The harmonic distribution of the synthesizer parameters generator:

In [18]:
plt.figure(figsize=(20,4))
librosa.display.specshow(midi_synth_params['harmonic_distribution'].numpy()[0].T)
plt.colorbar()

The noise magnitudes of the DDSP inference (autoencoder):

In [19]:
plt.figure(figsize=(20,4))
librosa.display.specshow(synth_params['noise_magnitudes'].numpy()[0].T)
plt.colorbar()

The noise magnitudes of the synthesizer parameters generator:

In [20]:
plt.figure(figsize=(20,4))
librosa.display.specshow(midi_synth_params['noise_magnitudes'].numpy()[0].T)
plt.colorbar()