Python debugger cuts out in multi.py::duplicated(), and I think the final error is somewhere in the autogenerated Cython hashtable bindings? I'm not sure how to debug from multi.py onward.
I found #27035 (comment), which mentions a missing precondition check that might be related, but this is pure speculation on my part. Besides, tuples should be hashable.
Thanks for the report. The levels that you are setting are not compatible with the codes in the MultiIndex. The set_levels method only changes the levels, and they must be compatible with the codes. You can see this error by passing verify_integrity=True.
As such you wind up with a MultiIndex in an invalid state, and it is going to give you wrong answers.
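A minimal sketch of what that check catches, using made-up level values rather than the reporter's actual data; with verify_integrity=True, set_levels rejects levels that are not compatible with the codes:

```python
import pandas as pd

mi = pd.MultiIndex.from_tuples([("a", 1), ("b", 2)])

# With the integrity check on, invalid levels raise immediately,
# e.g. (message approximate): ValueError: Level values must be unique
mi.set_levels([["x", "x"], [1, 2]], verify_integrity=True)
```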
If manipulating the index in a way that creates duplicates makes these methods return invalid results, then what is the purpose of is_unique or .duplicated()?
In any case what are the recommended steps here?
Basically I need to do something similar to the MRE for my df: disable checks for its index, recalculate values (I know this will result in duplicates), and then use pd.Index.duplicated() to drop those and bring the index back to a valid state.
I didn't come up with this; I took it from the docs.
Right now I'm resetting and setting the index as a workaround.
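A sketch of that workaround, assuming a DataFrame df whose MultiIndex levels are named (the names below are hypothetical):

```python
# Rebuilding the index through columns forces pandas to recompute the
# codes, so drop_duplicates()/is_unique see the actual values again.
names = list(df.index.names)  # e.g. ["k1", "k2"] (assumed)
df = df.reset_index().drop_duplicates(subset=names).set_index(names)
assert df.index.is_unique
```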
If manipulating the index in a way that creates duplicates makes these methods return invalid results
It's not that you are creating duplicates. You are disabling safety checks, and then passing invalid data. That allows the index to get into an invalid state.
Pandas version checks
I have checked that this issue has not already been reported.
I have confirmed this bug exists on the latest version of pandas.
I have confirmed this bug exists on the main branch of pandas.
Reproducible Example
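The original code block did not survive here; the following is a plausible minimal sketch of the pattern described in this thread, not the reporter's exact code:

```python
import pandas as pd

mi = pd.MultiIndex.from_tuples([("a", 1), ("b", 2)])

# Disable the safety check and set levels that collapse both entries
# to the same tuple ("x", 1), silently creating duplicates.
mi = mi.set_levels([["x", "x"], [1, 1]], verify_integrity=False)

print(list(mi))         # [('x', 1), ('x', 1)] -- duplicate tuples
print(mi.is_unique)     # True           (duplicates go undetected)
print(mi.duplicated())  # [False False]  (likewise)
```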
Issue Description
Pandas does not detect MultiIndex duplicates that were created using set_levels(). MRE outputs:
Expected Behavior
Detect duplicates/non-uniqueness. MRE outputs:
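Assuming the sketch above, the expected output would be:

```python
print(mi.is_unique)     # False
print(mi.duplicated())  # [False  True]
```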
Installed Versions
INSTALLED VERSIONS
commit : 3f7bc81
python : 3.12.7
python-bits : 64
OS : Linux
OS-release : 6.11.4-gentoo
Version : #1 SMP PREEMPT_DYNAMIC Tue Oct 22 20:38:14 CEST 2024
machine : x86_64
processor : AMD Ryzen 5 4500 6-Core Processor
byteorder : little
LC_ALL : None
LANG : en_IE.utf8
LOCALE : en_IE.UTF-8
pandas : 3.0.0.dev0+1654.g3f7bc81ae
numpy : 2.1.3
dateutil : 2.9.0.post0
pip : 24.2
Cython : None
sphinx : None
IPython : None
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : None
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pytz : 2024.2
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None