Python desktop application test automation with pyautogui: GUI, mouse, and keyboard

Design philosophy: coordinates and images

pyautogui does not parse the platform's native control hierarchy; it locates elements purely by screen coordinates. Whether you measure a position by hand from a screenshot or obtain it from an automated tool, as long as you have the coordinates, you can operate on the element.

First, the mouse

1, getting coordinates

import pyautogui as ui
# get the screen size
size = ui.size()
# get the current mouse position
p = ui.position()
# check whether the coordinates are within the screen
if_on = ui.onScreen(*p)

2, mouse movement

x, y = ui.size()
ui.moveTo(x / 2, y / 2, duration=2, tween=ui.easeInCirc)

Parameter Description:

  • x, y: the target coordinates
  • duration: how long the move takes, in seconds; the default is instantaneous
  • tween: easing effect for the motion; rarely needed.

3, dragging the mouse to a specified coordinate

ui.dragTo(500, 500)

4, a "sharpshooter" archery game

import random
import time
import pyautogui as ui

x, y = ui.position()
target = (800, 800)

for i in range(10):
    # pick a random starting position
    rand_x = random.randint(0, x)
    rand_y = random.randint(0, y)
    print(rand_x, rand_y)
    ui.moveTo(rand_x, rand_y)
    # drag from there to the target position
    ui.dragTo(*target, duration=0.2)
    time.sleep(1)

Effect: (demo animation not included)

5, relative movement

ui.move(-500, duration=1)
ui.move(yOffset=-400, duration=1)
ui.move(500, duration=1)
ui.move(yOffset=400, duration=1)

A little game using relative movement:

start = 20
add_point = 10
duration = 0.5

for i in range(10):
    if i % 2 == 0:
        ui.drag(start, duration=duration)
        ui.drag(yOffset=start, duration=duration)
    else:
        ui.drag(-start, duration=duration)
        ui.drag(yOffset=-start, duration=duration)
    start += add_point
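The drags above trace a growing square spiral. As a sketch (the helper name and the replay loop are my own, not part of the original), the offsets can be generated by a pure function and then fed to `ui.drag`:

```python
def spiral_offsets(start=20, step=10, rounds=10):
    """Generate (dx, dy) drag offsets tracing a growing square spiral.

    Even iterations move right then down (positive y is downward on
    screen); odd iterations move left then up; the distance grows by
    `step` each iteration.
    """
    offsets = []
    dist = start
    for i in range(rounds):
        sign = 1 if i % 2 == 0 else -1
        offsets.append((sign * dist, 0))  # horizontal leg
        offsets.append((0, sign * dist))  # vertical leg
        dist += step
    return offsets

# Replaying with pyautogui (requires a GUI session):
# import pyautogui as ui
# for dx, dy in spiral_offsets():
#     ui.drag(dx, dy, duration=0.5)
```

Separating the geometry from the GUI calls also makes the movement pattern easy to check without a display.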

Effect: (demo animation not included)

6, click

ui.click(x=None, y=None,
         clicks=1,        # number of clicks
         interval=0.0,    # pause between clicks
         button='right',  # which button: 'left', 'middle', or 'right'
         duration=0.0)    # time taken to move to (x, y)

click is further wrapped into convenience functions: leftClick, rightClick, middleClick, doubleClick, and tripleClick.
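Each wrapper is just `click()` with a preset argument. The table below is my own summary for reference, not a pyautogui data structure:

```python
# How each convenience wrapper maps onto click() arguments
# (a reference table of my own, not part of pyautogui's API).
WRAPPER_ARGS = {
    "leftClick":   {"button": "left"},
    "rightClick":  {"button": "right"},
    "middleClick": {"button": "middle"},
    "doubleClick": {"clicks": 2},
    "tripleClick": {"clicks": 3},
}
```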

7, scroll

pyautogui can scroll the window, but its scroll wrapper feels limited: it measures scrolling in mouse "clicks", so it is hard to know where the scroll will actually end up.

ui.scroll(10)                # scroll up 10 clicks
ui.scroll(-10)               # scroll down 10 clicks
ui.scroll(10, x=100, y=100)  # move to (100, 100), then scroll

drag and dragTo can be more convenient: being coordinate-based, they can scroll a window by dragging with the middle mouse button. This example compares scroll against a middle-button drag:

x, y = ui.size()
ui.scroll(-100)
time.sleep(1)
ui.scroll(100)
time.sleep(1)
ui.dragTo(y=y, button='middle')  # scroll to the bottom of the window
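How many pixels one scroll "click" covers depends on the OS and application settings. If you calibrate it once for your target window, you can convert a desired pixel distance into clicks. A hypothetical helper (`pixels_per_click` is a value you would have to measure yourself):

```python
import math

def clicks_for_pixels(pixels, pixels_per_click):
    """Convert a signed pixel distance into whole scroll clicks,
    rounding away from zero so we never scroll short."""
    clicks = math.ceil(abs(pixels) / pixels_per_click)
    return clicks if pixels >= 0 else -clicks

# e.g. if one click moves ~100 px in your app:
# import pyautogui as ui
# ui.scroll(clicks_for_pixels(-250, 100))  # scroll down ~250 px
```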

Effect: (demo animation not included)

Second, the keyboard

1, typing into an input box

# type "yuz", with a 0.2 s interval between characters
pyautogui.write("yuz", interval=0.2)

Note: pyautogui does not focus the input box automatically; you must click the input box first, before typing anything.
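Since every entry needs a click first, it is handy to bundle the two steps. A small sketch (the helper and its `gui` parameter are my own; passing the module in makes the logic testable without a display):

```python
def type_into(gui, x, y, text, interval=0.2):
    """Click the field at (x, y) to focus it, then type the text."""
    gui.click(x, y)
    gui.write(text, interval=interval)

# usage with the real library:
# import pyautogui as ui
# type_into(ui, 300, 200, "yuz")
```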

2, pressing a key

ui.press('enter', presses=1, interval=0.0)

Corresponding to mouse clicks, press operates keyboard keys such as the shift key or the Enter key. All key names can be found in the source under KEYBOARD_KEYS or KEY_NAMES.

parameter:

  • presses: how many times to press the key
  • interval: pause between keystrokes, in seconds

The full key list: (screenshot not included; see pyautogui.KEYBOARD_KEYS)

3, hotkeys

ui.hotkey('ctrl', 'shift', 'esc')

4, keyUp, keyDown

These decompose the press action, just as mouseUp and mouseDown decompose a mouse click. The hotkey operation above is equivalent to:

ui.keyDown('ctrl')   # press ctrl
ui.keyDown('shift')  # press shift
ui.keyDown('esc')    # press esc
ui.keyUp('esc')      # release esc
ui.keyUp('shift')    # release shift
ui.keyUp('ctrl')     # release ctrl
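The pattern generalizes: press in order, release in reverse, which is exactly how a hotkey behaves. A sketch (the helper name is mine; taking the module as a `gui` parameter keeps it testable without a display):

```python
def hold_keys(gui, *keys):
    """Press the keys in order, then release them in reverse order,
    mimicking what hotkey() does."""
    for key in keys:
        gui.keyDown(key)
    for key in reversed(keys):
        gui.keyUp(key)

# usage: import pyautogui as ui; hold_keys(ui, 'ctrl', 'shift', 'esc')
```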

Third, image recognition

Coordinate-based positioning is what makes pyautogui so easily cross-platform. But in practice it is hard to pin down the exact location of the control you want to operate, because the same page may open in a different position each time. pyautogui's solution is simple and crude: image recognition. It searches the screen for a given image and returns its coordinates, reducing the problem back to coordinate handling.

1, locateCenterOnScreen

Returns the center coordinates of the image found on screen. Parameters:

  • the first positional argument: the path of the image to search for;
  • confidence: the required match accuracy; requires opencv to be installed;
  • grayscale: match in grayscale, which can speed up recognition.

ui.locateCenterOnScreen('img/seven.png', confidence=0.7, grayscale=True)

At this stage, image recognition results are not fully satisfying; the problems with this approach include:

  • the specified element may not be recognized at all;

  • recognition precision can be insufficient;

  • searching is slow;

  • it depends on the heavy opencv library (you could try switching to other libraries);

  • every target needs a screenshot prepared in advance; with many controls to operate, preparing material by hand is exhausting.

So pyautogui's sweet spot is cross-platform scenarios with a small number of interactive native controls; to drive a large number of native application controls, switch to a more appropriate tool.
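One way to soften the "element not recognized" flakiness above is to retry the search a few times before giving up. A sketch (the wrapper is my own; `locate` stands in for `ui.locateCenterOnScreen`, which returns None when nothing matches in the classic API):

```python
import time

def locate_with_retry(locate, image, attempts=3, delay=1.0, **kwargs):
    """Run the locate function up to `attempts` times, sleeping
    `delay` seconds between tries; return the first hit or None."""
    for _ in range(attempts):
        pos = locate(image, **kwargs)
        if pos is not None:
            return pos
        time.sleep(delay)
    return None

# usage:
# import pyautogui as ui
# pos = locate_with_retry(ui.locateCenterOnScreen, 'img/seven.png',
#                         confidence=0.7, grayscale=True)
```

Note that newer pyautogui versions may raise ImageNotFoundException instead of returning None; wrap the call in try/except there.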

A concrete example based on image recognition (clicking the 7, ×, 2, and = buttons):

import time
import pyautogui as ui

time.sleep(3)
seven = ui.locateCenterOnScreen('img/seven.png', confidence=0.7, grayscale=True)
mult = ui.locateCenterOnScreen('img/multipy.png', confidence=0.7, grayscale=True)
two = ui.locateCenterOnScreen('img/two.png', confidence=0.7, grayscale=True)
equal = ui.locateCenterOnScreen('img/equal.png', confidence=0.7, grayscale=True)
ui.click(*seven)
ui.click(*mult)
ui.click(*two)
ui.click(*equal)

Effect: (demo animation not included)

Fourth, more to come


Origin www.cnblogs.com/wangboyi/p/12660383.html