BUG: DataFrame.convert_dtypes fails on column that is already "string" dtype #31731
@DAVIDWALES thanks for the report. I would appreciate it if you didn't call something "completely broken", but you have certainly discovered a bug. There are several issues here.
First, calling convert_dtypes on a column that already has the "string" dtype converts the string values to bytes. At first it's bizarre where those bytes come from, but this seems to be a bug in how convert_dtypes handles columns that are already "string" dtype.
The fact that those values become bytes is also the reason you see "object" as the dtype after the second time you call convert_dtypes (bytes are python objects, and are stored in object dtype).
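A quick illustration of that point (my own example, not part of the original report): bytes values always end up in an object-dtype column.

import pandas as pd

# bytes objects fall back to the generic object dtype
s = pd.Series([b"abc", b"def"])
s.dtype   # dtype('O'), i.e. object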
Secondly: Although surprising, this is the expected behaviour. In general, when you assign a scalar to a column, the value goes through the default dtype inference; the existing dtype of the column is not used. You will see the same with other dtypes as well:

In [24]: df = pd.DataFrame({'a': pd.array(['a', 'b'], dtype="string"), 'b': [1, 2]})
In [26]: df.dtypes
Out[26]:
a string
b int64
dtype: object
# in the default dtype inference, a string is stored in object dtype
In [27]: df["a"] = "other"
In [28]: df.dtypes
Out[28]:
a object
b int64
dtype: object
# assigning a timestamp will give datetime64 dtype
In [29]: df["b"] = pd.Timestamp("2012-01-01")
In [31]: df.dtypes
Out[31]:
a object
b datetime64[ns]
dtype: object

So you will see this phenomenon with all dtypes. I agree it is "extra" surprising here, because with the string you are actually assigning a value that matches the dtype of the existing column, yet the default dtype inference still gives you the old "object" dtype.
@jorisvandenbossche Sorry for the dramatic title! Thanks for fixing it. :) Thanks for the explanation. Does this mean that the best way to add a new string column consisting of repeated values is something like this?

In [2]: df = pd.DataFrame({'A': ['a', 'b', 'c']}, dtype='string')
In [3]: df
Out[3]:
A
0 a
1 b
2 c
In [4]: df.dtypes
Out[4]:
A string
dtype: object
In [5]: df['A'] = pd.Series(['test'] * len(df), dtype='string')
In [6]: df
Out[6]:
A
0 test
1 test
2 test
In [7]: df.dtypes
Out[7]:
A string
dtype: object

Is there a more straightforward way to express this, in the spirit of a simple scalar assignment such as df['A'] = 'test'?
I am afraid not. It's certainly not ideal (and your example above actually also needs to pass df.index to Series(...) in case you have a non-default index), but it's a problem in general, only aggravated by the fact that the "string" dtype is not yet the default dtype. In the case where you are overwriting an existing column we could special-case this, but for the general case of assigning a new column from a string scalar you would still have the same problem. If you want, feel free to open a new issue with the example from your last comment (so we can keep this issue for the string-to-bytes bug).
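For reference, a minimal sketch of the index-aware version mentioned above (my own illustration; the non-default index is made up):

import pandas as pd

# a frame with a non-default index
df = pd.DataFrame({'A': pd.array(['a', 'b', 'c'], dtype='string')}, index=[10, 20, 30])

# passing df.index keeps the new values aligned with the existing rows;
# a default RangeIndex would not align and would leave the column all <NA>
df['A'] = pd.Series(['test'] * len(df), index=df.index, dtype='string')

df.dtypes   # A is still "string"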
I've added another issue to consider better ways to assign scalars to nullable columns: #31763
I don't know if this is a separate bug, but I get an error with the following example.

Code Sample

df = pd.DataFrame({'A': ['ä', 'ö', 'ü'], 'B': ['d', 'e', 'f']})
df1 = df.convert_dtypes()
df2 = df1.convert_dtypes()

Full traceback
Output of pd.show_versions()
Yes, that's probably the same issue. Thanks for reporting!
Code Sample, a copy-pastable example if possible
Problem description
The documentation for DataFrame.convert_dtypes() claims that it will 'Convert columns to best possible dtypes using dtypes supporting pd.NA'. However, this does not appear to be the case. As you can see in Out[6] above, if the dtypes are already optimal, they will be converted back to objects. Even worse, if some of the dtypes are optimal but other dtypes are objects, they will be switched: the optimal dtypes will become objects, and the objects will become optimal.
Some mysterious additional details: as shown in Out[9], df.convert_dtypes() will not only cause the string column to become object and the object column to become string, it will also change the strings in the newly created object column into bytes (Out[11]).
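For illustration, a minimal sketch of the behaviour being described (my own reproduction attempt, with made-up column names; the buggy results are as reported for pandas 1.0.0):

import pandas as pd

# one column that is already "string" dtype, one plain object column
df = pd.DataFrame({'A': pd.array(['a', 'b'], dtype='string'), 'B': ['x', 'y']})
df.dtypes   # A: string, B: object

out = df.convert_dtypes()
# reportedly, on pandas 1.0.0 the dtypes are swapped: A comes back as object
# (with its values turned into bytes), while B becomes "string"
out.dtypes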
Expected Output
Anything which can be converted to one of the new nullable datatypes would be converted to one of the new nullable datatypes. Anything which is already a nullable datatype would remain as it is.
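In terms of the sketch above (again with hypothetical column names), the expected result would be something like:

out = df.convert_dtypes()
out.dtypes   # A: string, B: string, with the original string values unchanged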
Output of pd.show_versions()
INSTALLED VERSIONS
commit : None
python : 3.7.5.final.0
python-bits : 64
OS : Windows
OS-release : 10
machine : AMD64
processor : Intel64 Family 6 Model 78 Stepping 3, GenuineIntel
byteorder : little
LC_ALL : None
LANG : None
LOCALE : None.None
pandas : 1.0.0
numpy : 1.18.1
pytz : 2019.3
dateutil : 2.8.1
pip : 20.0.2
setuptools : 45.1.0.post20200127
Cython : 0.29.14
pytest : 5.3.4
hypothesis : 4.54.2
sphinx : 2.3.1
blosc : None
feather : None
xlsxwriter : 1.2.7
lxml.etree : 4.4.2
html5lib : 1.0.1
pymysql : None
psycopg2 : 2.8.4 (dt dec pq3 ext lo64)
jinja2 : 2.10.3
IPython : 7.11.1
pandas_datareader: None
bs4 : 4.8.2
bottleneck : 1.3.1
fastparquet : None
gcsfs : None
lxml.etree : 4.4.2
matplotlib : 3.1.2
numexpr : 2.7.0
odfpy : None
openpyxl : 3.0.3
pandas_gbq : None
pyarrow : None
pytables : None
pytest : 5.3.4
pyxlsb : None
s3fs : None
scipy : 1.3.2
sqlalchemy : 1.3.13
tables : 3.6.1
tabulate : None
xarray : 0.14.1
xlrd : 1.2.0
xlwt : 1.3.0
xlsxwriter : 1.2.7
numba : 0.48.0