
speech-separation-tf2's People

Contributors

r06944010


speech-separation-tf2's Issues

Questions regarding the implementation

Hi,
Thank you for sharing your implementation. After reading your code, I have a couple of questions. The first is that in the clustering part, your code actually implements the Adanet approach proposed in the Deep Attractor Network paper, whereas the paper you reproduce (improved source separation...) uses K-means (they claim it comes from Deep Attractor, but Deep Attractor does not actually use K-means in Adanet). So did you reach the paper's reported performance with the method you implemented?

Another question is that your mask does not use any nonlinear function, such as sigmoid or softmax, but DAnet actually uses these. Did you try nonlinear functions for the mask?
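To make the question concrete, here is a minimal TensorFlow sketch of what I mean by a nonlinear mask, with a softmax taken over the speaker axis; shapes and names are illustrative only and not taken from your code:

```python
import tensorflow as tf

# mask_logits: raw mask estimates, assumed shape [batch, time, features, n_speakers].
# Softmax over the speaker axis forces the speakers' masks to sum to 1 at every
# time-feature bin (as in DANet); sigmoid would bound each mask to (0, 1) independently.
def nonlinear_masks(mask_logits, kind="softmax"):
    if kind == "softmax":
        return tf.nn.softmax(mask_logits, axis=-1)
    elif kind == "sigmoid":
        return tf.nn.sigmoid(mask_logits)
    raise ValueError("unknown mask nonlinearity: {}".format(kind))
```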

Thank you so much.

Question

I cannot find the K-means part of the code in your project. Can you tell me where it is?
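To be concrete, I expected something along the lines of the following attractor-style K-means step at inference time; this is purely an illustration with assumed shapes, not code from this repository:

```python
import numpy as np
from sklearn.cluster import KMeans

# embeddings: [num_tf_bins, embed_dim] network output for one mixture (assumed layout).
def kmeans_attractors(embeddings, n_speakers=2):
    km = KMeans(n_clusters=n_speakers, n_init=10).fit(embeddings)
    attractors = km.cluster_centers_                # [n_speakers, embed_dim]
    # Similarity between every embedding and every attractor, turned into soft
    # masks with a numerically stabilised softmax over the speaker axis.
    logits = embeddings @ attractors.T              # [num_tf_bins, n_speakers]
    logits -= logits.max(axis=-1, keepdims=True)
    masks = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
    return attractors, masks
```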

Training on Libri2Mix

Thank you for sharing your code.

I made some changes to dataset.py so that it can process the Libri2Mix dataset, and I also deleted the fixed-permutation code in main.py.

However, the test result of the cross-domain model trained on 3 s segments from Libri2Mix train-100 is 11.5 dB (N=4, batch_size=4, early stopping is used for 10 epochs, all other settings unchanged), whereas Conv-TasNet reaches nearly 13 dB in [1].
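For context, removing the fixed-permutation code presumably means the training/evaluation has to consider both speaker orderings itself. A minimal numpy sketch of a 2-speaker permutation-invariant SI-SNR metric, illustrative only and not the repo's actual loss:

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR in dB for one estimated/reference pair (1-D arrays)."""
    ref_zm = ref - ref.mean()
    est_zm = est - est.mean()
    proj = np.dot(est_zm, ref_zm) / (np.dot(ref_zm, ref_zm) + eps) * ref_zm
    noise = est_zm - proj
    return 10 * np.log10((proj @ proj) / (noise @ noise + eps) + eps)

def pit_si_snr(est_a, est_b, ref_a, ref_b):
    """Average SI-SNR under the better of the two possible speaker assignments."""
    perm1 = 0.5 * (si_snr(est_a, ref_a) + si_snr(est_b, ref_b))
    perm2 = 0.5 * (si_snr(est_a, ref_b) + si_snr(est_b, ref_a))
    return max(perm1, perm2)
```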

Here is the dataset code:

```python
import os

import numpy as np
import tensorflow as tf
from tqdm import tqdm

import util  # project-local helper providing load_wav


class Libri2Mix():
    def __init__(self, config, mode):
        self.mode = mode
        self.config = config
        self.spk = config['training']['n_speaker']
        self.sample_rate = config['dataset']['sample_rate']
        self.path = config['dataset']['path']
        self.batch_size = config['training']['batch_size']
        self.num_stacks = config['model']['num_stacks']
        self.dilations = [2 ** i for i in range(0, config['model']['dilations'] + 1)]

        self.input_length = config['model']['input_length']
        self.num_residual_blocks = len(self.dilations) * self.num_stacks

        if config['training']['path'] == "models/test":
            self.tfr = os.path.join(config['dataset']['path'], 'tfrecord', mode + '_' + 'debug.tfr')
        else:
            self.tfr = os.path.join(config['dataset']['path'], 'tfrecord',
                                    str(config['training']['n_speaker']) + 'spk' + '_' + mode + '_' + str(
                                        self.input_length) + '.tfr')

        if not os.path.isfile(self.tfr) or config['dataset']['load_in_mem']:
            print('Find no {}'.format(self.tfr))
            print('[*] Load {} dataset in memory [*]'.format(self.mode))
            self.load_into_memory()

    def decode_dataset(self, serialized_example):
        example = tf.parse_single_example(
            serialized_example,
            features={
                "s1": tf.VarLenFeature(tf.float32),
                "s2": tf.VarLenFeature(tf.float32)
            },
        )
        s1 = tf.sparse_tensor_to_dense(example["s1"])
        s2 = tf.sparse_tensor_to_dense(example["s2"])
        audios = tf.stack([s1, s2])
        return audios

    def get_iterator(self):
        if os.path.isfile(self.tfr) and not self.config['dataset']['load_in_mem']:
            print("Loading data from \033[93m{} \033[0m".format(self.tfr))
            with tf.name_scope("input"):
                dataset = tf.data.TFRecordDataset(self.tfr).map(self.decode_dataset)
                if self.mode == "tr" or self.mode == "cv":
                    dataset = dataset.shuffle(self.batch_size * 100)
                dataset = dataset.batch(self.batch_size, drop_remainder=True)
                dataset = dataset.prefetch(self.batch_size * 10)
                self.iterator = dataset.make_initializable_iterator()
                return self.iterator.get_next()
        else:
            return None

    def load_into_memory(self):
        self.file_paths = {}
        self.sequences = {'tr': {'a': [], 'b': []}, 'cv': {'a': [], 'b': []}, 'tt': {'a': [], 'b': []}}
        if self.mode == "tr":
            self.path += "train-100"
        elif self.mode == "cv":
            self.path += "dev"
        for spk in ['a', 'b']:
            if spk == 'a':
                files = os.listdir(self.path + "/s1")
                spk_dir = "/s1/"
            else:
                files = os.listdir(self.path + "/s2")
                spk_dir = "/s2/"
            sequences = self.load_directory(files, spk_dir)
            self.sequences[self.mode][spk] = sequences

    def load_directory(self, filenames, spk_dir):
        # Load every utterance of one speaker folder, skipping clips shorter
        # than the model's input length.
        sequences = []
        for filename in tqdm(filenames):
            sequence = util.load_wav(self.path + spk_dir + filename, self.sample_rate)
            if len(sequence) < self.input_length:
                continue
            sequences.append(sequence)
        return sequences

    def get_next(self):
        n_data = {'tr': 13900, 'cv': 3000, 'tt': 3000}

        # Pad the index list up to a multiple of the batch size, then shuffle.
        indices = np.arange((n_data[self.mode] + self.batch_size - 1) // self.batch_size * self.batch_size)
        indices %= n_data[self.mode]
        np.random.shuffle(indices)

        for b in range(len(indices) // self.batch_size):
            sample_indices = indices[b * self.batch_size:(b + 1) * self.batch_size]
            batch_inputs = []

            for sample_i in sample_indices:
                speech_a = self.sequences[self.mode]['a'][sample_i]
                speech_b = self.sequences[self.mode]['b'][sample_i]
                # Random crop of input_length samples, same offset for both sources.
                offset = np.squeeze(np.random.randint(0, len(speech_a) - self.input_length + 1, 1))
                output_a = speech_a[offset:offset + self.input_length]
                output_b = speech_b[offset:offset + self.input_length]
                batch_inputs.append([output_a, output_b])
            batch_inputs = np.array(batch_inputs, dtype='float32')
            yield batch_inputs, sample_indices
```
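For completeness, this is roughly how the class can be smoke-tested; the config path and loop body below are placeholders, not taken from the repo:

```python
import json

# Hypothetical driver loop; adjust the config path to your setup.
with open("models/cdnet/cdnet.json") as f:
    config = json.load(f)

train_set = Libri2Mix(config, mode="tr")

for batch, sample_indices in train_set.get_next():
    # batch: [batch_size, 2, input_length] float32, the two clean sources.
    # The network input (mixture) is their sum along the speaker axis.
    mixture = batch.sum(axis=1)
    print(mixture.shape, sample_indices[:3])
    break  # one batch is enough for a smoke test
```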

Could you please tell me if I have done something wrong? Thank you!

[1] Joris Cosentino, Manuel Pariente, Samuele Cornell, Antoine Deleforge, and Emmanuel Vincent, “LibriMix: An Open-Source Dataset for Generalizable Speech Separation,” arXiv preprint arXiv:2005.11262, 2020.

Question - How to separate WSJ0-2MIX

issue 1:
(tensorflow-LY) D:\LY\work_place\Speech-Separation-TF2-master>python main.py -m test -c D:\LY\work_place\Speech-Separation-TF2-master\models\cdnet\cdnet.json -ckpt D:\LY\work_place\Speech-Separation-TF2-master\models\cdnet\generat.ckpt\checkpoint

Then:

2021-12-01 22:03:00.251526: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open D:\LY\work_place\Speech-Separation-TF2-master\models\cdnet\generat.ckpt\checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
2021-12-01 22:03:00.257818: W tensorflow/core/util/tensor_slice_reader.cc:95] Could not open D:\LY\work_place\Speech-Separation-TF2-master\models\cdnet\generat.ckpt\checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
2021-12-01 22:03:00.258167: W tensorflow/core/framework/op_kernel.cc:1622] OP_REQUIRES failed at save_restore_tensor.cc:175 : Data loss: Unable to open table file D:\LY\work_place\Speech-Separation-TF2-master\models\cdnet\generat.ckpt\checkpoint: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
I look forward to your reply.
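For reference, a "not an sstable (bad magic number)" error usually appears when the path handed to the restore op is the small textual `checkpoint` index file rather than the checkpoint prefix (the common stem of the .index/.data-* files). A minimal sketch for finding and inspecting the correct prefix; the directory path is the one from the command above, and assuming the -ckpt argument is ultimately forwarded to a checkpoint-loading call, it should be given that prefix instead of ...\checkpoint:

```python
import tensorflow as tf

ckpt_dir = r"D:\LY\work_place\Speech-Separation-TF2-master\models\cdnet\generat.ckpt"

# tf.train.latest_checkpoint reads the textual `checkpoint` file and returns the
# prefix of the newest .index/.data-* pair; that prefix is what restore expects.
ckpt_prefix = tf.train.latest_checkpoint(ckpt_dir)
print("checkpoint prefix:", ckpt_prefix)

# Quick sanity check that the prefix really points at a readable checkpoint.
reader = tf.train.load_checkpoint(ckpt_prefix)
for name, shape in reader.get_variable_to_shape_map().items():
    print(name, shape)
```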
