Question: Keras ModelCheckpoint callback returning weights only - saving the full Mask R-CNN model fails

I am training Mask R-CNN with Keras on Python 3.4 (TensorFlow backend, GeForce GTX 1080). The relevant parts of the configuration are NAME bioisland, BATCH_SIZE 1, MINI_MASK_SHAPE (56, 56), RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2] and BACKBONE_SHAPES [[256 256] [128 128] ... [32 32]]. I prepare the train data and call training with layers='heads', so only the head layers are selected for training (fpn_p4 (Conv2D), rpn_class_raw (Conv2D) inside rpn_model, and the TimeDistributed heads mrcnn_class_logits, mrcnn_class_conv2, mrcnn_mask_conv1, mrcnn_mask_conv2, mrcnn_mask_bn3, mrcnn_mask_deconv, and so on). Apart from the usual warnings (the TensorFlow library wasn't compiled to use SSE4.2 instructions; using a generator with use_multiprocessing=True and multiple workers may duplicate your data), the epoch itself runs normally:

    20/100 [=====>] - ETA: 56s - loss: 2.9232 - rpn_class_loss: 0.0366 - rpn_bbox_loss: 1.3771 - mrcnn_class_loss: 0.3072 - mrcnn_bbox_loss: 0.6509 - mrcnn_mask_loss: 0.5513
    57/100 [================>.] - ETA: 25s - loss: 2.4039 - rpn_class_loss: 0.0256 - rpn_bbox_loss: 0.8141 - mrcnn_class_loss: 0.1937 - mrcnn_bbox_loss: 0.7492 - mrcnn_mask_loss: 0.6213
    96/100 [===========================>..] - ETA: 2s - loss: 2.0161 - rpn_class_loss: 0.0218 - rpn_bbox_loss: 0.5911 - mrcnn_class_loss: 0.1504 - mrcnn_bbox_loss: 0.6898 - mrcnn_mask_loss: 0.5630

At the end of the epoch, when the ModelCheckpoint callback tries to save the model, the process dies inside copy.deepcopy. The recursion starts in keras save_model, which deep-copies the model configuration:

    Traceback (most recent call last):
      File "/home/jgq/anaconda3/envs/python34/lib/python3.4/site-packages/keras/models.py", line 107, in save_model
        'config': model.get_config()
      ...
        return copy.deepcopy(config)
      File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 155, in deepcopy
        y = copier(x, memo)
      File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 246, in _deepcopy_dict
        y[deepcopy(key, memo)] = deepcopy(value, memo)
      File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 219, in _deepcopy_list
        y.append(deepcopy(a, memo))
      File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 182, in deepcopy
        y = _reconstruct(x, rv, 1, memo)
      File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 300, in _reconstruct
        state = deepcopy(state, memo)
      (the deepcopy / _deepcopy_dict / _deepcopy_list / _reconstruct frames then repeat over and over as the copy descends through the nested model configuration)

I am trying to understand what happens when I use the Keras ModelCheckpoint callback without setting either save_best_only or save_weights_only to True, and what I lose if I switch to save_weights_only=True as a workaround. In particular, what could go wrong from just loading the weights when trying to keep training the same model, but in a different session (closing Python and continuing training some other day, for example)?
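For completeness, the callbacks are registered inside the Mask R-CNN training code roughly like this (a reconstruction from the fragments above, so paths and arguments are approximate; I replaced the stock save_weights_only=True with save_best_only=True, which is what the GitHub issue "change save_weight_only to save_best_only caused problem #530" describes, and that change is what makes the callback try to save the full model):

    callbacks = [
        keras.callbacks.TensorBoard(log_dir=self.log_dir,
                                    histogram_freq=0, write_graph=True, write_images=False),
        # stock version: keras.callbacks.ModelCheckpoint(self.checkpoint_path,
        #                                                verbose=0, save_weights_only=True),
        keras.callbacks.ModelCheckpoint(self.checkpoint_path,
                                        verbose=0, save_best_only=True),
    ]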
File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 155, in deepcopy ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1) Let's discuss in detail each of its arguments: filepath: This is the path to save your model. File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 155, in deepcopy # keras.callbacks.ModelCheckpoint(self.checkpoint_path, File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 155, in deepcopy Here is the documentation. On the other hand, with ModelCheckpoint, you have more control over when to save the weights, but you have to manually stop the training when the performance is no longer improving. File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 246, in _deepcopy_dict y = copier(x, memo) y[deepcopy(key, memo)] = deepcopy(value, memo) What could go wrong from just loading the weights when trying to keep training the same model, but in a different session (closing python and continuing training some other day, for example). y = copier(x, memo) layers='heads') load_from_checkpoint: checkpoint [ 'module_arguments'] KeyError y[deepcopy(key, memo)] = deepcopy(value, memo) rev2023.7.25.43544. y.append(deepcopy(a, memo)) To save a model in Keras, what are the differences between the output files of: The saved file from model.save() is larger than the model from model.save_weights(), but significantly larger than a JSON or Yaml model architecture file. ModelCheckpoint is a callback function used to save model file (h5) after epochs. File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 155, in deepcopy histogram_freq=0, write_graph=True, write_images=False), y.append(deepcopy(a, memo)) FALSE (default) and the whole model is saved, as in calling model.save(). y.append(deepcopy(a, memo)) File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 219, in _deepcopy_list Save_weights_only means it only saves the weights and not the full model. ModelCheckpoint model.fit () () python - Loading weights from HDf5 model - Stack Overflow File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 155, in deepcopy File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 300, in _reconstruct mrcnn_class_conv2 (TimeDistributed) These weights can be used to make predictions as is or as the basis for ongoing training. state = deepcopy(state, memo) File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 182, in deepcopy y = _reconstruct(x, rv, 1, memo) Traceback (most recent call last): return copy.deepcopy(config) You could just manually add in the N i.e., 1,2,3,N for each fit(). ModelCheckpoint PyTorch Lightning 2.0.5 documentation 37/100 [==========>.] - ETA: 38s - loss: 2.6511 - rpn_class_loss: 0.0279 - rpn_bbox_loss: 1.0910 - mrcnn_class_loss: 0.2358 - mrcnn_bbox_loss: 0.7180 - mrcnn_mask_loss: 0.5784 y.append(deepcopy(a, memo)) File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 300, in _reconstruct y = copier(x, memo) Keras: How to restore initial weights when using EarlyStopping, Is this mold/mildew? What is the difference between weights and variables in a Keras Model? 
The ModelCheckpoint callback class therefore lets you define where to checkpoint the model weights, how the file should be named, and under what circumstances to make a checkpoint of the model. A typical weights-only setup looks like this:

    cp_callback = keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                  save_weights_only=True, verbose=1)

    # Train the model with the new callback
    model.fit(train_images, train_labels, epochs=10,
              validation_data=(test_images, test_labels),
              callbacks=[cp_callback])  # Pass callback to training

One snippet posted in this thread used

    ModelCheckpoint(filepath=checkpoint_filepath, save_weights_only=False,
                    monitor='loss', mode='max', save_best_only=True)

Note that mode='max' is wrong when monitoring 'loss': use mode='min' (or 'auto'), otherwise the "best" checkpoint kept by save_best_only is actually the worst epoch. "save_best_only=True" saves every epoch that improves according to the mode ('min' for the loss in the corrected example), so what remains on disk is the checkpoint of the best epoch seen so far.
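Depending on the filepath specified, we can either keep overwriting a single checkpoint file or write one file per improvement/epoch. For the latter, the standard Keras filename templating can be used - a small sketch, assuming the same train/validation arrays as above (names are arbitrary):

    checkpoint = keras.callbacks.ModelCheckpoint(
        filepath='weights.{epoch:02d}-{val_loss:.2f}.hdf5',  # epoch and metric become part of the name
        monitor='val_loss', mode='min',
        save_best_only=True, save_weights_only=True, verbose=1)

    model.fit(train_images, train_labels, epochs=10,
              validation_data=(test_images, test_labels),
              callbacks=[checkpoint])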
From the documentation: save_weights_only (bool) - if True, then only the model's weights will be saved. So both save_best_only and save_weights_only have False as their default value, and if you leave them alone ModelCheckpoint writes the full model after every epoch, which is exactly the code path shown in the traceback above.

Comment: @MatiasValdenegro Would you care to explain why one would like to save the state of the optimizer?

Reply: the full model file contains the optimizer state (for Adam, for example, the per-parameter moment estimates) and the training configuration, in addition to the architecture and the weights. If you resume training in a new session from a weights-only file, the learned parameters are intact, but the freshly created optimizer starts from scratch, so the first updates after resuming can behave as if training had just begun (a brief jump in the loss, a reset learning-rate schedule). That is the main thing that can go wrong when you "just load the weights" to keep training; for pure inference nothing is lost. In summary, saving the weights during training allows you to persist the state of the model so that you can continue training or use the model for predictions later, while saving the full model additionally lets a resumed run pick up exactly where it left off.
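A sketch of the two ways to resume in a new session (assuming tf.keras, a hypothetical build_model() helper that reproduces the training architecture, and x_train/y_train as your training arrays); only the first restores the optimizer state:

    from tensorflow import keras

    # (a) Resume from a full-model checkpoint: optimizer state comes back with it.
    model = keras.models.load_model('full_model.h5')
    model.fit(x_train, y_train, epochs=5)          # continues as if training had never stopped

    # (b) Resume from a weights-only checkpoint: recompile, optimizer starts fresh.
    model = build_model()                          # hypothetical helper, must match the trained architecture
    model.compile(optimizer='adam', loss='mse')    # brand-new optimizer, no accumulated state
    model.load_weights('weights_only.h5')
    model.fit(x_train, y_train, epochs=5)          # weights are correct, optimizer warms up again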
Answer (about the crash itself): Yes, I believe this is a bug with TensorFlow/Keras rather than with your code. save_model() deep-copies model.get_config(), and for a model as deeply nested as Mask R-CNN (rpn_model plus the TimeDistributed heads) that deepcopy blows up. The practical workaround is to keep save_weights_only=True in the ModelCheckpoint, or to call model.save_weights() yourself instead of model.save('my_model.h5'). This way you save the weights, and when testing you have to build the model and load the weights separately.

Asker: I know this can fix the issue, but I wanted to save the whole model rather than the weights only, so I don't need to build the model every time I need to test it.
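With the Mask R-CNN code base, rebuilding for a test run is only a few lines - a sketch, assuming the matterport-style mrcnn package with an InferenceConfig and MODEL_DIR defined as in that project (the weight file name below is just a placeholder for whatever the checkpoint path produced):

    import mrcnn.model as modellib

    # Rebuild the network in inference mode, then load only the trained weights.
    model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir=MODEL_DIR)
    model.load_weights("mask_rcnn_bioisland_0001.h5", by_name=True)

    results = model.detect([image], verbose=0)     # image: one RGB numpy array
    r = results[0]                                 # dict with 'rois', 'class_ids', 'scores', 'masks'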
File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 155, in deepcopy WandbModelCheckpoint | Weights & Biases Documentation NAME bioisland 25/100 [======>..] - ETA: 50s - loss: 2.8614 - rpn_class_loss: 0.0322 - rpn_bbox_loss: 1.3599 - mrcnn_class_loss: 0.2834 - mrcnn_bbox_loss: 0.6357 - mrcnn_mask_loss: 0.5503 File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 246, in _deepcopy_dict y.append(deepcopy(a, memo)) y[deepcopy(key, memo)] = deepcopy(value, memo) rev2023.7.25.43544. 593), Stack Overflow at WeAreDevelopers World Congress in Berlin, Keras custom layer using tensorflow function, Group neural networks outputs using Keras/Tensorflow. state = deepcopy(state, memo) name: GeForce GTX 1080 y = copier(x, memo) To subscribe to this RSS feed, copy and paste this URL into your RSS reader. File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 300, in _reconstruct 593), Stack Overflow at WeAreDevelopers World Congress in Berlin, Temporary policy: Generative AI (e.g., ChatGPT) is banned. File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 182, in deepcopy y = copier(x, memo) y.append(deepcopy(a, memo)) What would naval warfare look like if Dreadnaughts never came to be? ModelCheckpoint keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1) . y[deepcopy(key, memo)] = deepcopy(value, memo) 99/100 [============================>.] restore_best_weights: Whether to restore model weights from the epoch with the best value of the monitored quantity. y = copier(x, memo) File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 246, in _deepcopy_dict Data Science Stack Exchange is a question and answer site for Data science professionals, Machine Learning specialists, and those interested in learning more about the field. 55/100 [===============>..] - ETA: 26s - loss: 2.4351 - rpn_class_loss: 0.0262 - rpn_bbox_loss: 0.8310 - mrcnn_class_loss: 0.1980 - mrcnn_bbox_loss: 0.7547 - mrcnn_mask_loss: 0.6253 state = deepcopy(state, memo) state = deepcopy(state, memo) Physical interpretation of the inner product between two quantum states. mrcnn_mask_conv1 (TimeDistributed) Browse other questions tagged, Where developers & technologists share private knowledge with coworkers, Reach developers & technologists worldwide, The future of collective knowledge sharing. y.append(deepcopy(a, memo)) Find centralized, trusted content and collaborate around the technologies you use most. mAP scores on tensorboard (Tensorflow Object Detection API) are all 0 even though the loss value is low, Using class weights in Keras with multiple binary outputs which are not simply one-hot-encoded, How gradients are flown back to Network in siamese architecture? Yes, I believe this is a bug with Tensorflow. 31/100 [========>] - ETA: 44s - loss: 2.8123 - rpn_class_loss: 0.0322 - rpn_bbox_loss: 1.2309 - mrcnn_class_loss: 0.2539 - mrcnn_bbox_loss: 0.7100 - mrcnn_mask_loss: 0.5854 File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 219, in _deepcopy_list y = _reconstruct(x, rv, 1, memo) File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 246, in _deepcopy_dict File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 155, in deepcopy y = _reconstruct(x, rv, 1, memo) It allows us to see the Callback's functionality in saving model weights during training, based on specific performance metrics. 
File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 219, in _deepcopy_list y = copier(x, memo) File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 300, in _reconstruct File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 155, in deepcopy File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 182, in deepcopy then load model by using: File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 246, in _deepcopy_dict File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 219, in _deepcopy_list What does it mean by "the full model is saved"? If you check the source code for ModelCheckpoint you can see that when save_weights_only=True you are actually calling model.save_weights () where model is an instance of tf.keras.Model. Tensorflow / Keras - Using both ModelCheckpoint: save_best_only and y[deepcopy(key, memo)] = deepcopy(value, memo) This may consume a large amount of memory. 82/100 [=======================>] - ETA: 10s - loss: 2.1098 - rpn_class_loss: 0.0233 - rpn_bbox_loss: 0.6370 - mrcnn_class_loss: 0.1572 - mrcnn_bbox_loss: 0.7177 - mrcnn_mask_loss: 0.5746 y = copier(x, memo) File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 182, in deepcopy File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 219, in _deepcopy_list y = copier(x, memo) File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 300, in _reconstruct y.append(deepcopy(a, memo)) y[deepcopy(key, memo)] = deepcopy(value, memo) File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 182, in deepcopy How do you analyse the rank of a matrix depending on a parameter. Does save_best_only in Keras prevents overfitting? File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 182, in deepcopy File "/home/jgq/anaconda3/envs/python34/lib/python3.4/copy.py", line 182, in deepcopy HansBambel commented Jun 9, 2020. [ 32 32] y = copier(x, memo) Term meaning multiple different layers across many eras? BACKBONE_SHAPES [[256 256] y.append(deepcopy(a, memo)) y = copier(x, memo) 42/100 [===========>] - ETA: 35s - loss: 2.5803 - rpn_class_loss: 0.0277 - rpn_bbox_loss: 1.0036 - mrcnn_class_loss: 0.2217 - mrcnn_bbox_loss: 0.7391 - mrcnn_mask_loss: 0.5882 state = deepcopy(state, memo) [128 128] use_multiprocessing=True, y = copier(x, memo) y = copier(x, memo) 81/100 [=======================>] - ETA: 10s - loss: 2.1212 - rpn_class_loss: 0.0235 - rpn_bbox_loss: 0.6438 - mrcnn_class_loss: 0.1587 - mrcnn_bbox_loss: 0.7182 - mrcnn_mask_loss: 0.5771