Sure, here is a more complex Python example. It implements a convolutional neural network with data augmentation, batch normalization, and training callbacks for image classification on the CIFAR-10 dataset:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, BatchNormalization
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import CategoricalCrossentropy
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint

# Load the data
(x_train, y_train), (x_test, y_test) = cifar10.load_data()

# Normalize pixel values to [0, 1]
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

# One-hot encode the labels
y_train = tf.keras.utils.to_categorical(y_train, 10)
y_test = tf.keras.utils.to_categorical(y_test, 10)

# Data augmentation
datagen = ImageDataGenerator(
    horizontal_flip=True,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    rotation_range=10,
    shear_range=0.1,
)
datagen.fit(x_train)

# Build the model
model = Sequential([
    Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(32, 32, 3)),
    BatchNormalization(),
    Conv2D(32, (3, 3), activation='relu'),
    BatchNormalization(),
    MaxPooling2D((2, 2)),
    Dropout(0.25),

    Conv2D(64, (3, 3), padding='same', activation='relu'),
    BatchNormalization(),
    Conv2D(64, (3, 3), activation='relu'),
    BatchNormalization(),
    MaxPooling2D((2, 2)),
    Dropout(0.25),

    Flatten(),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(10, activation='softmax')
])

# Callbacks: learning-rate scheduling, early stopping, and checkpointing
learning_rate_scheduler = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.0001)
early_stopping = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
checkpoint = ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True, mode='min')

# Compile the model
model.compile(optimizer=Adam(learning_rate=0.001),
              loss=CategoricalCrossentropy(),
              metrics=['accuracy'])

# Train the model with augmented batches
history = model.fit(
    datagen.flow(x_train, y_train, batch_size=64),
    epochs=50,
    validation_data=(x_test, y_test),
    callbacks=[learning_rate_scheduler, early_stopping, checkpoint]
)

# Evaluate the model
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f'Test accuracy: {test_acc}')
```

This code uses TensorFlow and Keras to build and train a convolutional neural network. Its main features are:

1. **Data augmentation**: `ImageDataGenerator` augments the training data on the fly to improve the model's generalization.
2. **Batch normalization**: a batch-normalization layer follows each convolutional layer to speed up convergence.
3. **Learning-rate scheduling**: the `ReduceLROnPlateau` callback lowers the learning rate dynamically when the validation loss plateaus.
4. **Early stopping**: the `EarlyStopping` callback halts training to prevent overfitting.
5. **Model checkpointing**: the model is saved whenever the validation loss reaches a new minimum, so the best weights can be reloaded later (see the sketch after this list).
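Since `ModelCheckpoint` writes the best weights to `best_model.h5`, a minimal sketch of reloading that checkpoint for later evaluation might look like the following; it assumes the training script above has already been run and produced `best_model.h5` in the working directory:

```python
import tensorflow as tf
from tensorflow.keras.datasets import cifar10

# Reload the checkpoint saved by ModelCheckpoint during training
best_model = tf.keras.models.load_model('best_model.h5')

# Prepare the test set the same way as during training
(_, _), (x_test, y_test) = cifar10.load_data()
x_test = x_test.astype('float32') / 255.0
y_test = tf.keras.utils.to_categorical(y_test, 10)

# Evaluate the restored model on the held-out test set
loss, acc = best_model.evaluate(x_test, y_test, verbose=2)
print(f'Restored model accuracy: {acc}')
```

Because `EarlyStopping` is configured with `restore_best_weights=True`, the in-memory `model` at the end of training should already hold the best weights; reloading the checkpoint file is mainly useful in a separate session or script.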