In pandas, if the rest of a Series is numeric, pandas automatically converts None to NaN; this can even silently undo operations such as `s[s.isnull()] = None` or `s.replace(np.nan, None)`. In that case, use the `where` function to perform the replacement.
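A minimal sketch of this behavior, assuming a plain float64 Series (the `astype(object)` step before `where` is one common way to make None stick):

```python
import numpy as np
import pandas as pd

# In a numeric Series, assigning None silently stores NaN instead.
s = pd.Series([1.0, 2.0, 3.0])
s[s > 2] = None
assert np.isnan(s[2])  # the None became NaN

# To actually hold None, switch to object dtype and use where().
s2 = s.astype(object).where(s.notna(), None)
assert s2[2] is None
```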
None can be written to a database directly as a NULL value, whereas importing data that contains NaN will raise an error.
Many numpy and pandas functions can handle NaN, but they raise an error when they encounter None.
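A quick illustration, using `np.nansum` as one example of such a function (None forces the array to object dtype, which `np.isnan` cannot process):

```python
import numpy as np

# np.nansum skips NaN in a float array without complaint.
total = np.nansum(np.array([1.0, 2.0, np.nan]))

# But an object array containing None raises TypeError,
# because isnan is not defined for object dtype.
try:
    np.nansum(np.array([1.0, 2.0, None]))
    raised = False
except TypeError:
    raised = True
```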
pandas' groupby does not handle None or NaN as group keys: rows whose key is None or NaN are silently dropped from the result by default.
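A sketch of the dropped-group behavior (the `dropna=False` escape hatch assumes pandas 1.1 or later):

```python
import numpy as np
import pandas as pd

# Rows whose group key is None or NaN are dropped by default.
df = pd.DataFrame({'k': ['a', None, 'a', np.nan], 'v': [1, 2, 3, 4]})
g = df.groupby('k')['v'].sum()
assert list(g.index) == ['a']
assert g['a'] == 4            # rows with missing keys contributed nothing

# Since pandas 1.1, dropna=False keeps the missing-key rows.
g_all = df.groupby('k', dropna=False)['v'].sum()
assert g_all.sum() == 10
```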
In short, None is built into Python itself, while NaN is provided by numpy and pandas.
Summary of equality comparisons (True means the two values are judged equal):
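The key comparison results can be verified directly:

```python
import numpy as np

# None compares equal to itself; NaN never compares equal to anything,
# not even to itself.
assert (None == None) is True
assert (np.nan == np.nan) is False

# Identity checks behave differently again: both are singletons here.
assert None is None
assert np.nan is np.nan   # same object, yet == still returns False
```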
In a DataFrame or Series, you can use isna or notna to check for NaN.
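For example, `isna`/`notna` flag both NaN and None uniformly (note the None is coerced to NaN in this numeric Series):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, None])  # None is coerced to NaN here
assert s.isna().tolist() == [False, True, True]
assert s.notna().tolist() == [True, False, False]
```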
Because equality comparisons involving None and NaN behave inconsistently across scenarios, None is, relatively speaking, the more predictable of the two.
To avoid unnecessary trouble and the burden of remembering special cases, three principles are recommended in practice:
1. When processing data with pandas and numpy, unify None and NaN into NaN, since more functions support NaN.
2. To compare an entire Series or numpy array for equality, use the dedicated Series.equals and numpy.array_equal functions; do not use ==.
3. Before importing data into a database, replace NaN with None.
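Principles 2 and 3 can be sketched as follows (`np.array_equal`'s `equal_nan` flag assumes numpy 1.19+, and the `astype(object).where(...)` idiom is one common NaN-to-None conversion before a database insert):

```python
import numpy as np
import pandas as pd

# Principle 2: whole-object equality in the presence of NaN.
a = pd.Series([1.0, np.nan])
b = pd.Series([1.0, np.nan])
assert not (a == b).all()       # element-wise ==: NaN != NaN
assert a.equals(b)              # equals treats matching NaN positions as equal
assert np.array_equal(a.values, b.values, equal_nan=True)

# Principle 3: replace NaN with None before a database insert.
df = pd.DataFrame({'x': [1.0, np.nan]})
df_db = df.astype(object).where(df.notna(), None)
assert df_db['x'][1] is None    # NULL-friendly value for the database
```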
Reference links:
https://blog.csdn.net/weixin_43746235/article/details/86296140
https://blog.csdn.net/zn505119020/article/details/78530827
Converting between NaN and None:
https://www.jb51.net/article/149749.htm