Update to_categorical in data_utils.py #950

Open
wants to merge 61 commits into base: 0.3.2

Conversation

cassiePython

I think there is a mistake in this function. When I test the code below, taken from the tflearn Quickstart:
//--------------------- split line --------------------------------
import numpy as np
import tflearn

# Download the Titanic dataset
from tflearn.datasets import titanic
titanic.download_dataset('titanic_dataset.csv')

# Load CSV file, indicate that the first column represents labels
from tflearn.data_utils import load_csv
data, labels = load_csv('titanic_dataset.csv', target_column=0,
                        categorical_labels=True, n_classes=2)
//--------------------- split line --------------------------------
the Python interpreter shows me the error message:

return (y[:, None] == np.unique(y)).astype(np.float32)
TypeError: list indices must be integers or slices, not tuple

I checked the source code of "data_utils.py" and found the error in the function "to_categorical":
it should call "y = np.array(y)" before "return (y[:, None] == np.unique(y)).astype(np.float32)", because y should be an array rather than a list.
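For reference, a minimal sketch of the suggested change, based only on the line quoted above (the real tflearn to_categorical has more arguments and handling than shown here):

```python
import numpy as np

def to_categorical(y):
    # Sketch of the suggested fix: convert first, because a plain Python list
    # does not support the tuple indexing used by y[:, None].
    y = np.array(y)
    return (y[:, None] == np.unique(y)).astype(np.float32)

# With a list input this no longer raises the TypeError:
print(to_categorical([0, 1, 1, 0]))
```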

willduan and others added 27 commits June 18, 2017 17:50
* add contrastive loss

* add contrastive loss
NumPy's asarray function keeps an ndarray's shape, so the original to_categorical function cannot return the right answer when y is an ndarray with 2 or more dimensions. A reshape is added to flatten the ndarray to a 1-D array, and a warning is added when the input array has 3 or more dimensions.
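A minimal sketch of the behaviour this commit describes (not tflearn's exact code; the warning text here is illustrative):

```python
import warnings
import numpy as np

def to_categorical(y):
    # Sketch: flatten multi-dimensional label arrays before one-hot encoding.
    y = np.asarray(y)
    if y.ndim > 2:
        warnings.warn("to_categorical received an array with more than 2 dimensions")
    if y.ndim > 1:
        y = y.reshape(-1)  # e.g. an (n, 1) column of labels becomes shape (n,)
    return (y[:, None] == np.unique(y)).astype(np.float32)

print(to_categorical(np.array([[0], [1], [1], [0]])))  # same result as a flat list
```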
* fix bidirectional_rnn so it works with TF 1.2 (resolves #818)

* fix bidirectional RNN, ensure backward compatibility
* add contrastive loss

* add contrastive loss

* Fix "Using a `tf.Tensor` as a Python `bool` is not allowed"

tensorflow: tf.variable_op_scope(values, name, default_name) is deprecated,
use tf.variable_scope(name, default_name, values)
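For context, a minimal sketch of the TF 1.x argument-order change described above (the scope and variable names are illustrative):

```python
import tensorflow as tf  # TF 1.x API

inputs = tf.placeholder(tf.float32, shape=[None, 8])

# Deprecated:   tf.variable_op_scope([inputs], None, "my_layer")
# Replacement:  tf.variable_scope(None, "my_layer", [inputs])
with tf.variable_scope(None, "my_layer", [inputs]):
    weights = tf.get_variable("W", shape=[8, 4])
```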
* Fix Grayscale Image Shape

When using grayscale images, making the shape [None, width, height, 1] will allow you to use it directly with an input_layer, since it's a 4-D tensor, as opposed to [None, width, height]. Color images are 4-D by default ([None, width, height, 3]).

* Fix Grayscale Image Shape 

Changed the addition to work on all versions of Python (tuple unpacking like that is only available in 3.x).
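A minimal sketch of the version-agnostic way to build the 4-D grayscale shape (the variable names here are illustrative, not tflearn's):

```python
# Python 3 only (extended tuple unpacking):
#   shape = [None, *image_shape, 1]
# Works on both Python 2 and 3 (plain list concatenation):
image_shape = (28, 28)  # illustrative width and height
shape = [None] + list(image_shape) + [1]
print(shape)  # [None, 28, 28, 1]
```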

* Fix Typo

Added a paren
The code was messy, so I reformatted it.
* [Docs] Convolution layers: Typo fixes

The markdown output should be less confusing now.

* [Docs] Ftrl Proximal optimizer: Typo fix

Another missing `backtick`.
In the example, we point to the legacy seq2seq class in TensorFlow.

In summaries.py, the format of the tag has changed in TensorFlow; this breaks the seq2seq example, so we hotfix the particular tag that causes the problem.
* Improve to_categorical function

nb_classes not required anymore

* Refactoring examples
Temporarily add the older 'nb_classes' arg for backward compatibility with existing code.
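A minimal sketch of what a backward-compatible signature could look like (assumed code, not tflearn's actual implementation; it presumes integer labels in the range 0..nb_classes-1 when nb_classes is given):

```python
import numpy as np

def to_categorical(y, nb_classes=None):
    # nb_classes is optional: inferred from the data when omitted,
    # honored for older callers when provided.
    y = np.asarray(y).reshape(-1)
    if nb_classes is not None:
        return np.eye(nb_classes, dtype=np.float32)[y.astype(int)]
    return (y[:, None] == np.unique(y)).astype(np.float32)
```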
plooney and others added 2 commits December 12, 2017 07:54
* Adding upscore3D layer

* Changing upscore2d to be consistent with upscore3d.
aymericdamien and others added 30 commits January 11, 2018 19:55
* Create VGG19.py

* Update README.md
* Renamed tflearn.losses to regularizers.

The functions within the renamed module are not losses but regularizers.

* Replaced logging.warn with logging.warning.

logging.warn is now deprecated.

* Extracted method from duplicate code in dnn.py.
* initializations: lazily import xavier and variance scaling from tf.contrib

* variables: use vendored copy of tensorflow's add_arg_scope

* data_utils: replace VocabularyProcessor with a lazy-loading proxy
* fixed directory mismatch in cifar10 loaddata

* Update cifar10.py
"Truncating type '%s' not understood" should have 'truncating', not 'padding'
* Fixing termlogs for R2

When using R2 as metric, training step displays 'val_acc' instead of 'val_R2'

* Fixing termlogs for R2

While using R2 as metric, termlog displays 'val_acc' instead of 'val_R2'
Hard sigmoid is faster to compute than the sigmoid function.
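A minimal sketch of one common hard-sigmoid approximation (the clip(0.2x + 0.5, 0, 1) form used by Keras/Theano; the exact constants in tflearn may differ):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hard_sigmoid(x):
    # Piecewise-linear approximation: only a multiply, an add and a clip.
    return np.clip(0.2 * x + 0.5, 0.0, 1.0)

x = np.linspace(-4.0, 4.0, 5)
print(sigmoid(x))
print(hard_sigmoid(x))
```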
Now the regular training model saver won't delete the "best validation
accuracy" models.
GELUs are nonconvex and nonmonotonic, unlike ReLU or ELU.
Reference: Gaussian Error Linear Units (GELUs), Hendrycks et al., 2018.
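For reference, a minimal sketch of the GELU and the tanh approximation given in the referenced paper (plain Python, for illustration only):

```python
import math

def gelu(x):
    # Exact GELU: x * Phi(x), where Phi is the standard normal CDF.
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    # tanh approximation from the GELU paper.
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

for v in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(v, round(gelu(v), 4), round(gelu_tanh(v), 4))
```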
* add tf2 support

* cleanup

* update to 0.5.0
* add tf2 support

* cleanup

* update to 0.5.0

* fix & update setup
* Added fashion_mnist dataset

* Added triplet loss