Fall in love with Python series - Python performance (8): reducing memory usage with __slots__

__slots__ fixes the set of attributes a class can have. Once __slots__ is defined, only the attributes listed in it can be assigned; new attributes can no longer be added dynamically.

A class that does not define __slots__ stores each instance's attributes in a per-instance __dict__. When __slots__ is defined, Python instead reserves a fixed slot on the instance for each listed attribute, so no per-instance __dict__ is allocated at all. This is where the memory saving comes from.
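A minimal sketch of this behavior (the class name Fixed here is illustrative, not from the code below):

class Fixed:
    __slots__ = ('x', 'y')

f = Fixed()
f.x = 1                        # fine: 'x' is declared in __slots__
try:
    f.z = 3                    # not declared, so assignment fails
except AttributeError as e:
    print(e)                   # 'Fixed' object has no attribute 'z'

print(hasattr(f, '__dict__'))  # False: no per-instance dict is allocated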

Next, we run an experiment to compare the number of bytes used by an object with and without __slots__.

With __slots__:

from pympler.asizeof import asizesof

class DaGongRen:  # "da gong ren" -- a worker
    __slots__ = ('id_dg', 'age', 'salary')

    def __init__(self, id_dg, age, salary):
        self.id_dg = id_dg
        self.age = age
        self.salary = salary

def test():
    d = DaGongRen(10001, 18, 2000)
    print(asizesof(d))  # asizesof returns a tuple of sizes, one per argument

if __name__ == '__main__':
    test()

Output:

[aspiree1431 opt]# python  lru_cache.py
(152,)

Without __slots__:

from pympler.asizeof import asizesof

class DaGongRen:  # "da gong ren" -- a worker
    # __slots__ = ('id_dg', 'age', 'salary')

    def __init__(self, id_dg, age, salary):
        self.id_dg = id_dg
        self.age = age
        self.salary = salary

def test():
    d = DaGongRen(10001, 18, 2000)
    print(asizesof(d))

if __name__ == '__main__':
    test()

Output:

[aspiree1431 opt]# python  lru_cache.py
(416,)

We can see that with __slots__ the object's memory footprint drops from 416 bytes to 152 bytes, roughly a 63% reduction.
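A quick way to see where the difference comes from is to measure the instance's __dict__ in the version without __slots__. A sketch, assuming pympler is installed (the class name WithDict is illustrative):

from pympler.asizeof import asizesof

class WithDict:  # same three attributes as DaGongRen, no __slots__
    def __init__(self):
        self.id_dg, self.age, self.salary = 10001, 18, 2000

d = WithDict()
print(asizesof(d))           # total reachable size of the instance
print(asizesof(d.__dict__))  # size of the per-instance attribute dict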

The above only measures a single object. Next, let's compare the memory usage of a whole function with and without __slots__:
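A note on tooling: the @profile decorator in the scripts below comes from memory_profiler. Assuming it is installed (pip install memory_profiler), running the script with

python -m memory_profiler lru_cache.py

injects profile into the builtins and prints the line-by-line memory tables shown below.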

With __slots__:

class DaGongRen:  # "da gong ren" -- a worker
    __slots__ = ('id_dg', 'age', 'salary')

    def __init__(self, id_dg, age, salary):
        self.id_dg = id_dg
        self.age = age
        self.salary = salary

@profile  # injected by memory_profiler when run via `python -m memory_profiler`
def test():
    d = [DaGongRen(10001, 18, 2000) for i in range(100000)]

if __name__ == '__main__':
    test()

Output:

[aspiree1431 opt]# python -m memory_profiler lru_cache.py 
Filename: lru_cache.py

Line #    Mem usage    Increment  Occurences   Line Contents
============================================================
    32   33.656 MiB   33.656 MiB           1   @profile
    33                                         def test():
    34   40.809 MiB    7.152 MiB      100003            d=[ DaGongRen(10001,18,2000) for i in range(100000) ]

Without __slots__:

class DaGongRen:  # "da gong ren" -- a worker
    # __slots__ = ('id_dg', 'age', 'salary')

    def __init__(self, id_dg, age, salary):
        self.id_dg = id_dg
        self.age = age
        self.salary = salary

@profile
def test():
    d = [DaGongRen(10001, 18, 2000) for i in range(100000)]

if __name__ == '__main__':
    test()

Output:

[aspiree1431 opt]# python -m memory_profiler lru_cache.py 
Filename: lru_cache.py

Line #    Mem usage    Increment  Occurences   Line Contents
============================================================
    32   33.926 MiB   33.926 MiB           1   @profile
    33                                         def test():
    34   50.301 MiB   16.375 MiB      100003            d=[ DaGongRen(10001,18,2000) for i in range(100000) ]

We can see the reduction is still large, but the proportional saving looks smaller than in the single-object comparison (see the quick calculation below). That is because the objects are only one part of the memory the process holds. A simple analogy: suppose A has two stock accounts, A1 and A2. A1 holds 90% of the money and barely changes; A2 holds the other 10% and is actively traded. Even if the market is great and A2 triples, A's total account cannot triple.
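To put numbers on this, a quick calculation from the measurements above:

per_object    = (416 - 152) / 416           # ~63% saved per object
list_building = (16.375 - 7.152) / 16.375   # ~56% saved on the list-building increment
whole_process = (50.301 - 40.809) / 50.301  # ~19% saved on total process memory
print(f'{per_object:.0%}, {list_building:.0%}, {whole_process:.0%}')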

So let's do one more experiment: shrink the workload, and we will find there is no visible difference between using __slots__ and not using it.

Specifically, shrink the allocation in the code above; the runs below go all the way down to constructing a single object:

# with __slots__
[aspiree1431 opt]# python -m memory_profiler lru_cache.py 
Filename: lru_cache.py

Line #    Mem usage    Increment  Occurences   Line Contents
============================================================
    32   33.129 MiB   33.129 MiB           1   @profile
    33                                         def test():
    34   33.129 MiB    0.000 MiB           1            d=DaGongRen(2,18,2000)

# without __slots__
[aspiree1431 opt]# python -m memory_profiler lru_cache.py 
Filename: lru_cache.py

Line #    Mem usage    Increment  Occurences   Line Contents
============================================================
    32   33.344 MiB   33.344 MiB           1   @profile
    33                                         def test():
    34   33.344 MiB    0.000 MiB           1            d=DaGongRen(2,18,2000)

This experiment shows that __slots__ only pays off when you build objects in large numbers; when a function creates just a handful of instances, the saving is too small to show up at the function level.
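As a standard-library-only cross-check (a sketch; absolute numbers will vary by Python version and platform), tracemalloc shows the same effect without any third-party packages:

import tracemalloc

class WithSlots:
    __slots__ = ('id_dg', 'age', 'salary')
    def __init__(self, id_dg, age, salary):
        self.id_dg, self.age, self.salary = id_dg, age, salary

class WithoutSlots:
    def __init__(self, id_dg, age, salary):
        self.id_dg, self.age, self.salary = id_dg, age, salary

def traced_bytes(cls, n=100000):
    tracemalloc.start()
    objs = [cls(10001, 18, 2000) for _ in range(n)]  # keep objects alive while measuring
    current, _peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current

print('with __slots__   :', traced_bytes(WithSlots), 'bytes')
print('without __slots__:', traced_bytes(WithoutSlots), 'bytes')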


Origin: blog.csdn.net/zhou_438/article/details/109276710