Edge Impulse Experts / Snoring Detection (Syntiant NDP120)

Training settings


Augmentation settings

Advanced training settings

Audio training options

Neural network architecture

# Expert-mode Keras code. The variables args, input_length, classes,
# train_dataset, validation_dataset, callbacks, train_sample_count and
# BatchLoggerCallback are injected by the Edge Impulse training pipeline.
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Dense, InputLayer, Dropout, Conv1D, Conv2D,
                                     Flatten, Reshape, MaxPooling1D, MaxPooling2D,
                                     AveragePooling2D, BatchNormalization, Permute,
                                     ReLU, Softmax)
from tensorflow.keras.optimizers.legacy import Adam

EPOCHS = args.epochs or 100
LEARNING_RATE = args.learning_rate or 0.0005
# If True, non-deterministic functions (e.g. shuffling batches) are not used.
# This is False by default.
ENSURE_DETERMINISM = args.ensure_determinism
# This controls the batch size, or you can manipulate the tf.data.Dataset objects yourself
BATCH_SIZE = args.batch_size or 32
if not ENSURE_DETERMINISM:
    train_dataset = train_dataset.shuffle(buffer_size=BATCH_SIZE * 4)
train_dataset = train_dataset.batch(BATCH_SIZE, drop_remainder=False)
validation_dataset = validation_dataset.batch(BATCH_SIZE, drop_remainder=False)

# model architecture
model = Sequential()
channels = 1
columns = 40
rows = int(input_length / (columns * channels))
model.add(Reshape((rows, columns, channels), input_shape=(input_length, )))
model.add(Permute((2, 1, 3)))  # H and W are reversed for NDP120 Conv2D input

# Syntiant TDK supports only valid padding
model.add(Conv2D(32, kernel_size=5,
                 kernel_constraint=tf.keras.constraints.MaxNorm(1),
                 padding='valid', activation='relu'))
model.add(MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(Dropout(0.25))

# Syntiant TDK supports only valid padding
model.add(Conv2D(16, kernel_size=3,
                 kernel_constraint=tf.keras.constraints.MaxNorm(1),
                 padding='valid', activation='relu'))
model.add(MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(Dropout(0.25))

# Flatten to 1 dimension - the Syntiant TDK only supports 2 dimensions with data
model.add(AveragePooling2D(pool_size=(model.layers[-1].output_shape[1], 1),
                           strides=1, padding='valid'))
model.add(Flatten())
model.add(Dense(16, activation='relu',
                activity_regularizer=tf.keras.regularizers.l1(0.00001)))
model.add(Dropout(0.5))
model.add(Dense(8, activation='relu',
                activity_regularizer=tf.keras.regularizers.l1(0.00001)))
model.add(Dropout(0.5))
model.add(Dense(classes, name='y_pred', activation='softmax'))

# this controls the learning rate
opt = Adam(learning_rate=LEARNING_RATE, beta_1=0.9, beta_2=0.999)
callbacks.append(BatchLoggerCallback(BATCH_SIZE, train_sample_count,
                                     epochs=EPOCHS,
                                     ensure_determinism=ENSURE_DETERMINISM))

# train the neural network
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
model.fit(train_dataset, epochs=EPOCHS, validation_data=validation_dataset,
          verbose=2, callbacks=callbacks)

# Use this flag to disable per-channel quantization for a model.
# This can reduce RAM usage for convolutional models, but may have
# an impact on accuracy.
disable_per_channel_quantization = False
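One detail worth noting in the code above: both Conv2D layers carry a MaxNorm(1) kernel constraint, which rescales weights whose L2 norm exceeds 1 after each gradient update, keeping values in a range that quantizes well for the NDP120. A minimal pure-Python illustration of the clipping rule, applied to a single weight vector (a sketch of the operation, not the Keras implementation, which applies it along a configurable axis of the kernel tensor):

```python
import math

def max_norm(w, max_value=1.0):
    """Rescale vector w so its L2 norm is at most max_value."""
    norm = math.sqrt(sum(x * x for x in w))
    if norm <= max_value:
        return list(w)          # already within the constraint
    return [x * max_value / norm for x in w]

print(max_norm([3.0, 4.0]))     # norm 5 -> rescaled to [0.6, 0.8]
print(max_norm([0.3, 0.4]))     # norm 0.5 -> unchanged
```

Because the constraint is re-applied after every weight update, the network never drifts into large weight magnitudes that would lose precision under fixed-point quantization.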
Input layer (1,600 features)
Reshape layer (40 columns)
2D conv / pool layer (32 filters, 5 kernel size, 1 layer)
Dropout (rate 0.25)
2D conv / pool layer (16 filters, 3 kernel size, 1 layer)
Dropout (rate 0.25)
Flatten layer
Dense layer (16 neurons)
Dropout (rate 0.5)
Dense layer (8 neurons)
Dropout (rate 0.5)
Output layer (2 classes)
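Tracing the shapes through the layer list above with valid-padding arithmetic shows why the final AveragePooling2D is sized the way it is: it collapses the remaining height dimension to 1, satisfying the Syntiant TDK's two-dimensional data constraint before the dense layers. A standalone sketch of that arithmetic (plain Python, no TensorFlow required):

```python
def conv_valid(size, kernel):
    """Output size of a 'valid'-padded convolution with stride 1."""
    return size - kernel + 1

def pool(size, pool_size, stride):
    """Output size of a 'valid'-padded pooling layer."""
    return (size - pool_size) // stride + 1

h = w = 40            # Reshape to (40, 40, 1); Permute swaps H and W (both 40 here)
c = 1

h, w, c = conv_valid(h, 5), conv_valid(w, 5), 32   # Conv2D(32, kernel 5) -> 36x36x32
h, w = pool(h, 2, 2), pool(w, 2, 2)                # MaxPooling2D(2, 2)   -> 18x18x32
h, w, c = conv_valid(h, 3), conv_valid(w, 3), 16   # Conv2D(16, kernel 3) -> 16x16x16
h, w = pool(h, 2, 2), pool(w, 2, 2)                # MaxPooling2D(2, 2)   -> 8x8x16
h = pool(h, h, 1)                                  # AveragePooling2D((8, 1)) -> 1x8x16
flat = h * w * c                                   # Flatten -> 128 features

print(h, w, c, flat)  # -> 1 8 16 128
```

The 128 flattened features then feed the 16- and 8-neuron dense layers before the 2-class softmax output.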
