ShawnHymel / perfect-toast-machine

Training settings

The Edge Impulse Studio form exposes three settings here: the number of training cycles (epochs, numeric only), the learning rate (a value between 0 and 1), and the training processor. In expert mode these are set directly in the script below: EPOCHS = 500, and learning_rate=0.005 passed to the Adam optimizer.


Neural network architecture

Expert mode exposes the full Keras training script. Note that train_dataset, validation_dataset, classes, callbacks, train_sample_count, and BatchLoggerCallback are supplied by the surrounding Edge Impulse training harness rather than defined here. The commented-out middle layers are a disabled alternative that reshapes the 180 input features into 20 time steps of 9 sensor channels and applies a depthwise 1D convolution.

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Dense, InputLayer, Dropout, Conv1D, Conv2D,
                                     Flatten, Reshape, MaxPooling1D, MaxPooling2D,
                                     BatchNormalization, TimeDistributed,
                                     DepthwiseConv1D)
from tensorflow.keras.optimizers import Adam

EPOCHS = 500

# This controls the batch size, or you can manipulate the
# tf.data.Dataset objects yourself
BATCH_SIZE = 32
train_dataset = train_dataset.batch(BATCH_SIZE, drop_remainder=False)
validation_dataset = validation_dataset.batch(BATCH_SIZE, drop_remainder=False)

# Model architecture
model = Sequential()

# # Middle layers
# model.add(Reshape(target_shape=(20, 9),
#                   input_shape=(180,)))
# model.add(DepthwiseConv1D(3,                 # kernel size
#                           strides=1,
#                           padding='same',
#                           depth_multiplier=1,
#                           data_format='channels_last',
#                           activation='relu'))

# # Flatten and DNN for classification
# model.add(Flatten())
# model.add(Dropout(0.25))
# model.add(Dense(80, activation=tf.keras.activations.relu))
# model.add(Dropout(0.25))

model.add(Dense(80, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(40, activation='relu'))
model.add(Dropout(0.25))

# Final layer (for regression)
model.add(Dense(classes, name='y_pred', activation='linear'))

# This controls the learning rate
opt = Adam(learning_rate=0.005, beta_1=0.9, beta_2=0.999)
callbacks.append(BatchLoggerCallback(BATCH_SIZE, train_sample_count, epochs=EPOCHS))

# Train the neural network
model.compile(loss='mean_squared_error', optimizer=opt)
model.fit(train_dataset,
          epochs=EPOCHS,
          validation_data=validation_dataset,
          verbose=2,
          callbacks=callbacks)

# Use this flag to disable per-channel quantization for a model.
# This can reduce RAM usage for convolutional models, but may have
# an impact on accuracy.
disable_per_channel_quantization = False
```
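Pulling the commented pieces together, one plausible standalone assembly of that disabled variant looks like the sketch below. This is only an illustration: Dense(1) stands in for the Studio-supplied classes variable, which is 1 for this regression block.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten, Reshape, DepthwiseConv1D

# Reshape the 180 flat features into 20 time steps x 9 channels,
# then convolve each channel over time before the dense head.
alt_model = Sequential([
    Reshape(target_shape=(20, 9), input_shape=(180,)),
    DepthwiseConv1D(3,                 # kernel size
                    strides=1,
                    padding='same',
                    depth_multiplier=1,
                    data_format='channels_last',
                    activation='relu'),
    Flatten(),
    Dropout(0.25),
    Dense(80, activation='relu'),
    Dropout(0.25),
    Dense(1, name='y_pred', activation='linear'),  # classes == 1 for regression
])
alt_model.summary()
```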
The active (uncommented) architecture, as shown in the Studio's visual layer view:

- Input layer (180 features)
- Dense layer (80 neurons)
- Dropout (rate 0.25)
- Dense layer (40 neurons)
- Dropout (rate 0.25)
- Output layer (1 value)
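As a quick check on model size: each dense layer carries inputs × neurons weights plus one bias per neuron, so 180 × 80 + 80 = 14,480, then 80 × 40 + 40 = 3,240, then 40 × 1 + 1 = 41, for 17,761 trainable parameters in total, small enough to fit comfortably on a microcontroller.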

Model

After training, the Studio reports the finished model and lets you switch the model version between the quantized (int8) and unoptimized (float32) builds for deployment.
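This is not part of the Studio output, but as a rough sketch of how an exported model might be exercised off-device, assuming the trained model has been downloaded as a TensorFlow Lite file (model.tflite is a hypothetical filename) and takes the flat 180-feature vector described above:

```python
import numpy as np
import tensorflow as tf

# Hypothetical filename: Edge Impulse can export the trained model
# as a TensorFlow Lite file.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# One window of 180 features (20 time steps x 9 sensor channels,
# flattened). Dummy data stands in for real sensor readings.
x = np.random.rand(1, 180).astype(np.float32)

# This assumes a float-input model; a fully int8-quantized export
# would also need the scale/zero-point from input_details.
interpreter.set_tensor(input_details[0]["index"], x)
interpreter.invoke()

# Single regression output from the 'y_pred' layer
y_pred = interpreter.get_tensor(output_details[0]["index"])
print("Predicted value:", float(y_pred[0][0]))
```

The disable_per_channel_quantization flag in the training script only affects how the int8 weights are quantized during conversion; it does not change this calling pattern.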