This page was generated on 2022-04-13 12:06:05 -0400 (Wed, 13 Apr 2022).
| Hostname | OS | Arch (*) | R version | Installed pkgs |
|---|---|---|---|---|
| nebbiolo2 | Linux (Ubuntu 20.04.4 LTS) | x86_64 | 4.1.3 (2022-03-10) -- "One Push-Up" | 4324 |
| tokay2 | Windows Server 2012 R2 Standard | x64 | 4.1.3 (2022-03-10) -- "One Push-Up" | 4077 |
| machv2 | macOS 10.14.6 Mojave | x86_64 | 4.1.3 (2022-03-10) -- "One Push-Up" | 4137 |
Click on any hostname to see more info about the system (e.g. compilers).

(*) as reported by 'uname -p', except on Windows and Mac OS X.
To the developers/maintainers of the VAExprs package:

- Please allow up to 24 hours (and sometimes 48 hours) for your latest push to git@git.bioconductor.org:packages/VAExprs.git to be reflected on this report. See "How and When does the builder pull? When will my changes propagate?" for more information.
- Make sure to use the following settings in order to reproduce any error or warning you see on this page.
Package 2034/2083

VAExprs 1.0.1 (landing page)
Dongmin Jung

| Hostname | OS / Arch | INSTALL | BUILD | CHECK | BUILD BIN |
|---|---|---|---|---|---|
| nebbiolo2 | Linux (Ubuntu 20.04.4 LTS) / x86_64 | OK | ERROR | skipped | |
| tokay2 | Windows Server 2012 R2 Standard / x64 | OK | OK | OK | OK |
| machv2 | macOS 10.14.6 Mojave / x86_64 | OK | OK | OK | OK |
Package: VAExprs
Version: 1.0.1
Command: /home/biocbuild/bbs-3.14-bioc/R/bin/R CMD build --keep-empty-dirs --no-resave-data VAExprs
StartedAt: 2022-04-12 06:10:38 -0400 (Tue, 12 Apr 2022)
EndedAt: 2022-04-12 06:11:54 -0400 (Tue, 12 Apr 2022)
ElapsedTime: 76.1 seconds
RetCode: 1
Status: ERROR
PackageFile: None
PackageFileSize: NA
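The most faithful reproduction is to run the exact shell command from the Command field above with R 4.1.3. As a convenience, the same BUILD step can be approximated from an R session; a minimal sketch, assuming devtools is installed and the VAExprs sources are checked out under the current working directory (the path is hypothetical):

```r
## Sketch: approximate the builder's BUILD step from R. devtools::build()
## wraps `R CMD build` and forwards extra command-line flags via `args`.
## This is not the builder's exact environment -- just a local stand-in.
library(devtools)
build("VAExprs", args = c("--keep-empty-dirs", "--no-resave-data"))
```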
```
##############################################################################
##############################################################################
###
### Running command:
###
###   /home/biocbuild/bbs-3.14-bioc/R/bin/R CMD build --keep-empty-dirs --no-resave-data VAExprs
###
##############################################################################
##############################################################################

* checking for file ‘VAExprs/DESCRIPTION’ ... OK
* preparing ‘VAExprs’:
* checking DESCRIPTION meta-information ... OK
* installing the package to build vignettes
* creating vignettes ... ERROR
--- re-building ‘VAExprs.Rmd’ using rmarkdown
2022-04-12 06:11:09.446169: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/biocbuild/bbs-3.14-bioc/R/lib:/usr/local/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/java-11-openjdk-amd64/lib/server:/home/biocbuild/bbs-3.14-bioc/R/lib:/usr/local/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/java-11-openjdk-amd64/lib/server:/home/biocbuild/bbs-3.14-bioc/R/lib:/usr/local/lib:/usr/lib/x86_64-linux-gnu:/usr/lib/jvm/java-11-openjdk-amd64/lib/server
2022-04-12 06:11:09.446213: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:tensorflow:OMP_NUM_THREADS is no longer used by the default Keras config. To configure the number of threads, use tf.config.threading APIs.
2022-04-12 06:11:21.643491: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-04-12 06:11:21.644882: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2022-04-12 06:11:21.703932: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2022-04-12 06:11:21.703982: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (nebbiolo2): /proc/driver/nvidia/version does not exist
2022-04-12 06:11:21.704523: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-04-12 06:11:21.707923: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
2022-04-12 06:11:21.738084: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)
2022-04-12 06:11:21.744073: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2200000000 Hz

Train on 300 samples, validate on 300 samples
Epoch 1/100
300/300 [==============================] - 0s 2ms/sample - loss: 660.6599 - val_loss: 615.1421
[carriage-return progress bars for epochs 2-80 omitted; loss declines steadily and plateaus near 577-578]
Epoch 81/100
300/300 [==============================] - 0s 452us/sample - loss: 577.5339 - val_loss: 577.7261

Epoch 1/100
3/3 [==============================] - 1s 90ms/step - batch: 1.0000 - size: 32.0000 - loss: 2294625293141852028928.0000
Epoch 2/100
3/3 [==============================] - 0s 112ms/step - batch: 1.0000 - size: 32.0000 - loss: nan
/usr/local/lib/python3.8/dist-packages/tensorflow/python/keras/engine/training.py:2325: UserWarning: `Model.state_updates` will be removed in a future version. This property should not be used in TensorFlow 2.0, as `updates` are applied automatically.
  warnings.warn('`Model.state_updates` will be removed in a future version. ')
WARNING:tensorflow:Callback method `on_train_batch_begin` is slow compared to the batch time (batch time: 0.0333s vs `on_train_batch_begin` time: 0.0509s). Check your callbacks.
[progress bars for epochs 3-21 omitted; loss remains nan throughout]

Quitting from lines 205-214 (VAExprs.Rmd)
Error: processing vignette 'VAExprs.Rmd' failed with diagnostics:
NA/NaN/Inf in foreign function call (arg 1)
--- failed re-building ‘VAExprs.Rmd’

SUMMARY: processing the following file failed:
  ‘VAExprs.Rmd’

Error: Vignette re-building failed.
Execution halted
```
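The CUDA messages near the top of the log are expected on this builder: nebbiolo2 has no NVIDIA kernel driver, so TensorFlow falls back to the CPU, and the log itself says the cudart dlerror can be ignored. A minimal sketch for confirming the CPU-only fallback from R, assuming the tensorflow R package is installed and configured:

```r
## Sketch: list the GPUs TensorFlow can see. On a CPU-only host such as
## nebbiolo2 this returns an empty list, matching the
## CUDA_ERROR_NO_DEVICE message in the log above -- which suggests the
## vignette failure is not a GPU-configuration issue.
library(tensorflow)
tf$config$list_physical_devices("GPU")
```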
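The actual failure is numerical: in the second fit the loss explodes to roughly 2.3e21 in epoch 1 and is nan from epoch 2 onward, and the vignette chunk at lines 205-214 then fails with "NA/NaN/Inf in foreign function call (arg 1)" once the NaN values reach compiled code. A minimal sketch of how the divergence could be surfaced at its source, assuming a compiled keras model like the one the vignette trains (`model` and the training matrix `x` are hypothetical names, not the package's API):

```r
## Sketch: stop training at the first NaN batch loss (hypothetical
## `model` and `x`). callback_terminate_on_naan() halts fit() as soon
## as the loss becomes NaN, so the failure would point at the diverging
## fit instead of a downstream foreign function call.
library(keras)
history <- model %>% fit(
  x, x,
  epochs = 100,
  batch_size = 32,
  callbacks = list(callback_terminate_on_naan())
)
```

Once the divergence is confirmed, lowering the learning rate or rescaling the input expression matrix are the usual first remedies.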