[Shandong University Conference] Application Settings Module

Preamble

In this article, I will introduce my design for the settings page of the SDU Meeting client.

Overall structure

The entire settings module is encapsulated in a Setting component, which is displayed to the user as an antd Modal in the client. Its overall structure is divided into four parts:

  • General settings
  • Audio and video devices
  • Meeting status
  • About

Each part is subdivided into independent modules for easy maintenance.
(Figure: overview of the overall structure)

General settings

Let's first look at the general settings module, which manages some application-wide functions: whether to log in automatically when the app starts, whether the app launches automatically at boot, and whether to enable encryption for private video calls.
The code for the entire general settings module is as follows:

import { AlertOutlined, LogoutOutlined, QuestionCircleFilled } from '@ant-design/icons';
import { Button, Checkbox, Modal, Tooltip } from 'antd';
import React, { useEffect, useState } from 'react';
import { getMainContent } from 'Utils/Global';
import { eWindow } from 'Utils/Types';

export default function General() {
	const [autoLogin, setAutoLogin] = useState(localStorage.getItem('autoLogin') === 'true');
	const [autoOpen, setAutoOpen] = useState(false);
	const [securityPrivateWebrtc, setSecurityPrivateWebrtc] = useState(
		localStorage.getItem('securityPrivateWebrtc') === 'true'
	);
	useEffect(() => {
		eWindow.ipc.invoke('GET_OPEN_AFTER_START_STATUS').then((status: boolean) => {
			setAutoOpen(status);
		});
	}, []);

	return (
		<>
			<div>
				<Checkbox
					checked={autoLogin}
					onChange={(e) => {
						setAutoLogin(e.target.checked);
						localStorage.setItem('autoLogin', `${e.target.checked}`);
					}}>
					自动登录
				</Checkbox>
			</div>
			<div>
				<Checkbox
					checked={autoOpen}
					onChange={(e) => {
						setAutoOpen(e.target.checked);
						eWindow.ipc.send('EXCHANGE_OPEN_AFTER_START_STATUS', e.target.checked);
					}}>
					开机时启动
				</Checkbox>
			</div>
			<div style={{ display: 'flex' }}>
				<Checkbox
					checked={securityPrivateWebrtc}
					onChange={(e) => {
						if (e.target.checked) {
							Modal.confirm({
								icon: <AlertOutlined />,
								content:
									'开启加密会大幅度提高客户端的CPU占用,请再三确认是否需要开启该功能!',
								cancelText: '暂不开启',
								okText: '确认开启',
								onCancel: () => {},
								onOk: () => {
									setSecurityPrivateWebrtc(true);
									localStorage.setItem('securityPrivateWebrtc', `${true}`);
								},
							});
						} else {
							setSecurityPrivateWebrtc(false);
							localStorage.setItem('securityPrivateWebrtc', `${false}`);
						}
					}}>
					私人加密通话
				</Checkbox>
				<Tooltip placement='right' overlay={'开启加密会大幅度提高CPU占用且不会开启GPU加速'}>
					<QuestionCircleFilled style={{ color: 'gray', transform: 'translateY(25%)' }} />
				</Tooltip>
			</div>
			<div style={{ marginTop: '5px' }}>
				<Button
					icon={<LogoutOutlined />}
					danger
					type='primary'
					onClick={() => {
						Modal.confirm({
							title: '注销',
							content: '你确定要退出当前用户登录吗?',
							icon: <LogoutOutlined />,
							cancelText: '取消',
							okText: '确认',
							okButtonProps: {
								danger: true,
							},
							onOk: () => {
								eWindow.ipc.send('LOG_OUT');
							},
							getContainer: getMainContent,
						});
					}}>
					退出登录
				</Button>
			</div>
		</>
	);
}

Among these, automatic login is simple to implement, so I will focus on the implementation of launch at boot.

Launch at boot

To implement this feature, the user's Windows registry needs to be modified. The front end cannot modify the registry by itself, so we call Node.js modules through Electron to operate on it. In Electron's main process, we add the following event handlers to ipcMain:

const { app } = require('electron');
const ipc = require('electron').ipcMain;
const cp = require('child_process');

ipc.on('EXCHANGE_OPEN_AFTER_START_STATUS', (evt, openAtLogin) => {
	if (app.isPackaged) {
		if (openAtLogin) {
			cp.exec(
				`REG ADD HKLM\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Run /v SduMeeting /t REG_SZ /d "${process.execPath}" /f`,
				(err) => {
					console.log(err);
				}
			);
		} else {
			cp.exec(
				`REG DELETE HKLM\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Run /v SduMeeting /f`,
				(err) => {
					console.log(err);
				}
			);
		}
	}
});

ipc.handle('GET_OPEN_AFTER_START_STATUS', () => {
	return new Promise((resolve) => {
		cp.exec(
			`REG QUERY HKLM\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Run /v SduMeeting`,
			(err, stdout, stderr) => {
				if (err) {
					// 查询失败(通常是该键不存在),视为未开启开机自启
					return resolve(false);
				}
				resolve(stdout.indexOf('SduMeeting') >= 0);
			}
		);
	});
});

The two handlers correspond to modifying the launch-at-boot state and reading it, respectively. By calling the Node.js child_process module to run REG commands, we can add, delete, and query the registry entry on Windows, and thus control whether the application starts at boot.
Note that in the production environment, modifying the registry requires administrator privileges, so the packaged application must request them. Since I package with electron-packager, I add one more parameter to the packaging command: --win32metadata.requested-execution-level=requireAdministrator.
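To make the registry interaction easier to reason about, the three REG commands above can be produced by a small pure helper. This is a hypothetical sketch, not code from the original project; the key path and value name are taken from the snippet above.

```typescript
// Sketch: build the REG commands used above as a pure function (hypothetical
// helper, not part of the original project). The key path and value name
// follow the main-process snippet.
const RUN_KEY = 'HKLM\\SOFTWARE\\WOW6432Node\\Microsoft\\Windows\\CurrentVersion\\Run';
const VALUE_NAME = 'SduMeeting';

function buildRegCommand(action: 'ADD' | 'DELETE' | 'QUERY', execPath = ''): string {
	if (action === 'ADD') {
		// /t REG_SZ writes a string value; /f overwrites without prompting
		return `REG ADD ${RUN_KEY} /v ${VALUE_NAME} /t REG_SZ /d "${execPath}" /f`;
	}
	if (action === 'DELETE') {
		return `REG DELETE ${RUN_KEY} /v ${VALUE_NAME} /f`;
	}
	return `REG QUERY ${RUN_KEY} /v ${VALUE_NAME}`;
}
```

A pure function like this can be unit-tested without touching the registry, while the exec call stays a thin wrapper around it.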

Audio and video devices

Since the purpose of this project is to let multiple users hold video conferences online, we must handle the users' audio and video devices. For ease of maintenance, I split audio devices and video devices into two modules, with a multimedia devices module above them that manages the shared data (such as the current list of multimedia devices and the deviceId currently in use).

Multimedia Devices (MediaDevices.tsx)

In this module, we first need to enumerate all the multimedia devices connected to the user's machine. To do this, we can reuse the approach from the previous article, [Shandong University Conference] User Media Acquisition Based on WebRTC.
Let's first implement a function that fetches the user's multimedia devices:

/**
 * 获取用户多媒体设备
 */
function getUserMediaDevices() {
	return new Promise((resolve, reject) => {
		try {
			navigator.mediaDevices.enumerateDevices().then((devices) => {
				const generateDeviceJson = (device: MediaDeviceInfo) => {
					const formerIndex = device.label.indexOf(' (');
					const latterIndex = device.label.lastIndexOf(' (');
					const { label, webLabel } = ((label, deviceId) => {
						switch (deviceId) {
							case 'default':
								return {
									label: label.replace('Default - ', ''),
									webLabel: label.replace('Default - ', '默认 - '),
								};
							case 'communications':
								return {
									label: label.replace('Communications - ', ''),
									webLabel: label.replace('Communications - ', '通讯设备 - '),
								};
							default:
								return { label, webLabel: label };
						}
					})(
						formerIndex === latterIndex
							? device.label
							: device.label.substring(0, latterIndex),
						device.deviceId
					);
					return { label, webLabel, deviceId: device.deviceId };
				};
				let videoDevices = [],
					audioDevices = [];
				for (const index in devices) {
					const device = devices[index];
					if (device.kind === 'videoinput') {
						videoDevices.push(generateDeviceJson(device));
					} else if (device.kind === 'audioinput') {
						audioDevices.push(generateDeviceJson(device));
					}
				}
				store.dispatch(updateAvailableDevices(DEVICE_TYPE.VIDEO_DEVICE, videoDevices));
				store.dispatch(updateAvailableDevices(DEVICE_TYPE.AUDIO_DEVICE, audioDevices));
				resolve({ video: videoDevices, audio: audioDevices });
			});
		} catch (error) {
			console.warn('获取设备时发生错误');
			reject(error);
		}
	});
}

Calling this function fetches the current multimedia device information and dispatches it to Redux to update the state.
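The label normalization inside generateDeviceJson can be illustrated in isolation. On Windows, Chromium reports the default device with deviceId 'default' and a "Default - " label prefix, and appends a "(vendorId:productId)" suffix; the function strips the suffix and swaps the prefix for a Chinese one for display. The sketch below reproduces that transform with illustrative sample labels (not real device data):

```typescript
// Standalone sketch of the label transform used in generateDeviceJson.
// The sample labels in the tests are illustrative, not from a real device list.
function normalizeLabel(rawLabel: string, deviceId: string): { label: string; webLabel: string } {
	const formerIndex = rawLabel.indexOf(' (');
	const latterIndex = rawLabel.lastIndexOf(' (');
	// Drop the trailing "(vendorId:productId)" group when two "(...)" groups exist
	const label = formerIndex === latterIndex ? rawLabel : rawLabel.substring(0, latterIndex);
	switch (deviceId) {
		case 'default':
			return {
				label: label.replace('Default - ', ''),
				webLabel: label.replace('Default - ', '默认 - '),
			};
		case 'communications':
			return {
				label: label.replace('Communications - ', ''),
				webLabel: label.replace('Communications - ', '通讯设备 - '),
			};
		default:
			return { label, webLabel: label };
	}
}
```

So a label like "Default - 麦克风 (Realtek Audio) (10ec:0236)" becomes "麦克风 (Realtek Audio)" internally and "默认 - 麦克风 (Realtek Audio)" in the UI.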
The code of the entire multimedia device module is as follows:

import { CustomerServiceOutlined } from '@ant-design/icons';
import { Button } from 'antd';
import { globalMessage } from 'Components/GlobalMessage/GlobalMessage';
import React, { useEffect, useState } from 'react';
import { DEVICE_TYPE } from 'Utils/Constraints';
import { updateAvailableDevices } from 'Utils/Store/actions';
import store from 'Utils/Store/store';
import { DeviceInfo } from 'Utils/Types';
import AudioDevices from './AudioDevices';
import VideoDevices from './VideoDevices';

export default function MediaDevices() {
	const [videoDevices, setVideoDevices] = useState(store.getState().availableVideoDevices);
	const [audioDevices, setAudioDevices] = useState(store.getState().availableAudioDevices);
	const [usingVideoDevice, setUsingVideoDevice] = useState('');
	const [usingAudioDevice, setUsingAudioDevice] = useState('');
	useEffect(
		() =>
			store.subscribe(() => {
				const storeState = store.getState();
				setVideoDevices(storeState.availableVideoDevices);
				setAudioDevices(storeState.availableAudioDevices);
				setUsingVideoDevice(`${(storeState.usingVideoDevice as DeviceInfo).webLabel}`);
				setUsingAudioDevice(`${(storeState.usingAudioDevice as DeviceInfo).webLabel}`);
			}),
		[]
	);

	useEffect(() => {
		getUserMediaDevices();
	}, []);

	return (
		<>
			<AudioDevices
				audioDevices={audioDevices}
				usingAudioDevice={usingAudioDevice}
				setUsingAudioDevice={setUsingAudioDevice}
			/>
			<VideoDevices
				videoDevices={videoDevices}
				usingVideoDevice={usingVideoDevice}
				setUsingVideoDevice={setUsingVideoDevice}
			/>
			<Button
				type='link'
				style={{ fontSize: '0.9em' }}
				icon={<CustomerServiceOutlined />}
				onClick={() => {
					getUserMediaDevices().then(() => {
						globalMessage.success('设备信息更新完毕', 0.5);
					});
				}}>
				没找到合适的设备?点我重新获取设备
			</Button>
		</>
	);
}

/**
 * 获取用户多媒体设备
 */
function getUserMediaDevices() {
	return new Promise((resolve, reject) => {
		try {
			navigator.mediaDevices.enumerateDevices().then((devices) => {
				const generateDeviceJson = (device: MediaDeviceInfo) => {
					const formerIndex = device.label.indexOf(' (');
					const latterIndex = device.label.lastIndexOf(' (');
					const { label, webLabel } = ((label, deviceId) => {
						switch (deviceId) {
							case 'default':
								return {
									label: label.replace('Default - ', ''),
									webLabel: label.replace('Default - ', '默认 - '),
								};
							case 'communications':
								return {
									label: label.replace('Communications - ', ''),
									webLabel: label.replace('Communications - ', '通讯设备 - '),
								};
							default:
								return { label, webLabel: label };
						}
					})(
						formerIndex === latterIndex
							? device.label
							: device.label.substring(0, latterIndex),
						device.deviceId
					);
					return { label, webLabel, deviceId: device.deviceId };
				};
				let videoDevices = [],
					audioDevices = [];
				for (const index in devices) {
					const device = devices[index];
					if (device.kind === 'videoinput') {
						videoDevices.push(generateDeviceJson(device));
					} else if (device.kind === 'audioinput') {
						audioDevices.push(generateDeviceJson(device));
					}
				}
				store.dispatch(updateAvailableDevices(DEVICE_TYPE.VIDEO_DEVICE, videoDevices));
				store.dispatch(updateAvailableDevices(DEVICE_TYPE.AUDIO_DEVICE, audioDevices));
				resolve({ video: videoDevices, audio: audioDevices });
			});
		} catch (error) {
			console.warn('获取设备时发生错误');
			reject(error);
		}
	});
}

Video Devices (VideoDevices.tsx)

Taking the easy part first, let's skip past the audio module for now and look at the video devices module. The entire module code is as follows:

import { Button, Select } from 'antd';
import React, { useEffect, useRef, useState } from 'react';
import { DEVICE_TYPE } from 'Utils/Constraints';
import eventBus from 'Utils/EventBus/EventBus';
import { getDeviceStream } from 'Utils/Global';
import { exchangeMediaDevice } from 'Utils/Store/actions';
import store from 'Utils/Store/store';
import { DeviceInfo } from 'Utils/Types';

interface VideoDevicesProps {
	videoDevices: Array<DeviceInfo>;
	usingVideoDevice: string;
	setUsingVideoDevice: React.Dispatch<React.SetStateAction<string>>;
}

export default function VideoDevices(props: VideoDevicesProps) {
	const [isExamingCamera, setIsExamingCamera] = useState(false);
	const examCameraRef = useRef<HTMLVideoElement>(null);
	useEffect(() => {
		if (isExamingCamera) {
			videoConnect(examCameraRef);
		} else {
			const examCameraDOM = examCameraRef.current as HTMLVideoElement;
			examCameraDOM.pause();
			examCameraDOM.srcObject = null;
		}
	}, [isExamingCamera]);

	useEffect(() => {
		const onCloseSettingModal = function () {
			setIsExamingCamera(false);
		};
		eventBus.on('CLOSE_SETTING_MODAL', onCloseSettingModal);
		return () => {
			eventBus.off('CLOSE_SETTING_MODAL', onCloseSettingModal);
		};
	}, []);

	return (
		<div>
			请选择录像设备:
			<Select
				placeholder='请选择录像设备'
				style={{ width: '100%' }}
				onSelect={(
					label: string,
					option: { key: string; value: string; children: string }
				) => {
					props.setUsingVideoDevice(label);
					store.dispatch(
						exchangeMediaDevice(DEVICE_TYPE.VIDEO_DEVICE, {
							deviceId: option.key,
							label: option.value,
							webLabel: option.children,
						})
					);
					if (isExamingCamera) {
						videoConnect(examCameraRef);
					}
				}}
				value={props.usingVideoDevice}>
				{props.videoDevices.map((device) => (
					<Select.Option value={device.label} key={device.deviceId}>
						{device.webLabel}
					</Select.Option>
				))}
			</Select>
			<div style={{ margin: '0.25rem' }}>
				<Button
					style={{ width: '7em' }}
					onClick={() => {
						setIsExamingCamera(!isExamingCamera);
					}}>
					{isExamingCamera ? '停止检查' : '检查摄像头'}
				</Button>
			</div>
			<div
				style={{
					width: '100%',
					display: 'flex',
					justifyContent: 'center',
				}}>
				<video
					ref={examCameraRef}
					style={{
						background: 'black',
						width: '40vw',
						height: 'calc(40vw / 1920 * 1080)',
					}}
				/>
			</div>
		</div>
	);
}

async function videoConnect(examCameraRef: React.RefObject<HTMLVideoElement>) {
	const videoStream = await getDeviceStream(DEVICE_TYPE.VIDEO_DEVICE);
	const examCameraDOM = examCameraRef.current as HTMLVideoElement;
	examCameraDOM.srcObject = videoStream;
	examCameraDOM.play();
}

Through this module, users can switch to the camera they want to use and test it.

Audio Devices (AudioDevices.tsx)

The audio devices module provides roughly the same functions as the video devices module, but it additionally includes a microphone volume test. In this application, I implemented the volume test with an AudioWorkletNode. First, define a worklet script under the public directory and register a processor in it:

// \public\electronAssets\worklet\volumeMeter.js
/* eslint-disable no-underscore-dangle */
const SMOOTHING_FACTOR = 0.8;
// eslint-disable-next-line no-unused-vars
const MINIMUM_VALUE = 0.00001;
registerProcessor(
	'vumeter',
	class extends AudioWorkletProcessor {
		_volume;
		_updateIntervalInMS;
		_nextUpdateFrame;
		_currentTime;

		constructor() {
			super();
			this._volume = 0;
			this._updateIntervalInMS = 50;
			this._nextUpdateFrame = this._updateIntervalInMS;
			this._currentTime = 0;
			this.port.onmessage = (event) => {
				if (event.data.updateIntervalInMS) {
					this._updateIntervalInMS = event.data.updateIntervalInMS;
				}
			};
		}

		get intervalInFrames() {
			// eslint-disable-next-line no-undef
			return (this._updateIntervalInMS / 1000) * sampleRate;
		}

		process(inputs, outputs, parameters) {
			const input = inputs[0];

			// Note that the input will be down-mixed to mono; however, if no inputs are
			// connected then zero channels will be passed in.
			if (0 < input.length) {
				const samples = input[0];
				let sum = 0;

				// Calculate the squared-sum.
				for (const sample of samples) {
					sum += sample ** 2;
				}

				// Calculate the RMS level and update the volume.
				const rms = Math.sqrt(sum / samples.length);
				this._volume = Math.max(rms, this._volume * SMOOTHING_FACTOR);

				// Update and sync the volume property with the main thread.
				this._nextUpdateFrame -= samples.length;
				if (this._nextUpdateFrame < 0) {
					this._nextUpdateFrame += this.intervalInFrames;
					// eslint-disable-next-line no-undef
					if (!this._currentTime || 0.125 < currentTime - this._currentTime) {
						// eslint-disable-next-line no-undef
						this._currentTime = currentTime;
						this.port.postMessage({ volume: this._volume });
					}
				}
			}

			return true;
		}
	}
);
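The heart of the processor is the RMS-plus-smoothing step in process(). That math can be checked outside the worklet; this is a TypeScript sketch of the same computation, with illustrative sample data:

```typescript
// Sketch of the volume math in the worklet's process(): RMS of a sample block,
// smoothed against the previous volume so the meter decays instead of jittering.
const SMOOTHING_FACTOR = 0.8;

function nextVolume(samples: number[], previousVolume: number): number {
	let sum = 0;
	for (const sample of samples) {
		sum += sample ** 2; // squared-sum
	}
	const rms = Math.sqrt(sum / samples.length);
	// Rises immediately with the signal, falls at most 20% per block
	return Math.max(rms, previousVolume * SMOOTHING_FACTOR);
}
```

A loud block drives the meter up instantly, while silence lets it decay by the smoothing factor each block, which is why the UI bar falls smoothly rather than snapping to zero.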

In the React project, I use a custom Hook that loads this worklet script to measure the volume:

import { useCallback, useRef, useState } from 'react';

/**
 * 【自定义Hooks】监听媒体流音量
 * @returns 音量、连接流函数、断连函数
 */
const useVolume = () => {
	const [volume, setVolume] = useState(0);
	// 保存 AudioContext 及其节点,避免组件重渲染时丢失
	const ref = useRef<{
		audioContext?: AudioContext | null;
		source?: MediaStreamAudioSourceNode | null;
		node?: AudioWorkletNode | null;
	}>({});

	const onmessage = useCallback((evt: MessageEvent) => {
		if (!ref.current.audioContext) {
			return;
		}
		if (evt.data.volume) {
			setVolume(Math.round(evt.data.volume * 200));
		}
	}, []);

	const disconnectAudioContext = useCallback(() => {
		if (ref.current.node) {
			try {
				ref.current.node.disconnect();
			} catch (err) {}
		}
		if (ref.current.source) {
			try {
				ref.current.source.disconnect();
			} catch (err) {}
		}
		ref.current.node = null;
		ref.current.source = null;
		ref.current.audioContext = null;
		setVolume(0);
	}, []);

	const connectAudioContext = useCallback(
		async (mediaStream: MediaStream) => {
			if (ref.current.audioContext) {
				disconnectAudioContext();
			}
			try {
				ref.current.audioContext = new AudioContext();
				await ref.current.audioContext.audioWorklet.addModule(
					'../electronAssets/worklet/volumeMeter.js'
				);
				if (!ref.current.audioContext) {
					return;
				}
				ref.current.source = ref.current.audioContext.createMediaStreamSource(mediaStream);
				ref.current.node = new AudioWorkletNode(ref.current.audioContext, 'vumeter');
				ref.current.node.port.onmessage = onmessage;
				ref.current.source
					.connect(ref.current.node)
					.connect(ref.current.audioContext.destination);
			} catch (errMsg) {
				disconnectAudioContext();
			}
		},
		[disconnectAudioContext, onmessage]
	);

	return [volume, connectAudioContext, disconnectAudioContext];
};

The source code of the entire audio device module is as follows:

import { Button, Checkbox, Progress, Select } from 'antd';
import { globalMessage } from 'Components/GlobalMessage/GlobalMessage';
import React, { useEffect, useRef, useState } from 'react';
import { DEVICE_TYPE } from 'Utils/Constraints';
import eventBus from 'Utils/EventBus/EventBus';
import { getDeviceStream } from 'Utils/Global';
import { useVolume } from 'Utils/MyHooks/MyHooks';
import { exchangeMediaDevice } from 'Utils/Store/actions';
import store from 'Utils/Store/store';
import { DeviceInfo } from 'Utils/Types';

interface AudioDevicesProps {
	audioDevices: Array<DeviceInfo>;
	usingAudioDevice: string;
	setUsingAudioDevice: React.Dispatch<React.SetStateAction<string>>;
}

export default function AudioDevices(props: AudioDevicesProps) {
	const [isExamingMicroPhone, setIsExamingMicroPhone] = useState(false);
	const [isSoundMeterConnecting, setIsSoundMeterConnecting] = useState(false);
	const examMicroPhoneRef = useRef<HTMLAudioElement>(null);

	const [volume, connectStream, disconnectStream] = useVolume();

	useEffect(() => {
		const examMicroPhoneDOM = examMicroPhoneRef.current as HTMLAudioElement;
		if (isExamingMicroPhone) {
			getDeviceStream(DEVICE_TYPE.AUDIO_DEVICE).then((stream) => {
				connectStream(stream).then(() => {
					globalMessage.success('完成音频设备连接');
					setIsSoundMeterConnecting(false);
				});
				examMicroPhoneDOM.srcObject = stream;
				examMicroPhoneDOM.play();
			});
		} else {
			disconnectStream();
			examMicroPhoneDOM.pause();
		}
	}, [isExamingMicroPhone]);

	useEffect(() => {
		const onCloseSettingModal = function () {
			setIsExamingMicroPhone(false);
			setIsSoundMeterConnecting(false);
		};
		eventBus.on('CLOSE_SETTING_MODAL', onCloseSettingModal);
		return () => {
			eventBus.off('CLOSE_SETTING_MODAL', onCloseSettingModal);
		};
	}, []);

	const [noiseSuppression, setNoiseSuppression] = useState(
		localStorage.getItem('noiseSuppression') !== 'false'
	);
	const [echoCancellation, setEchoCancellation] = useState(
		localStorage.getItem('echoCancellation') !== 'false'
	);

	return (
		<div>
			请选择录音设备:
			<Select
				placeholder='请选择录音设备'
				style={{ width: '100%' }}
				onSelect={(
					label: string,
					option: { key: string; value: string; children: string }
				) => {
					props.setUsingAudioDevice(label);
					store.dispatch(
						exchangeMediaDevice(DEVICE_TYPE.AUDIO_DEVICE, {
							deviceId: option.key,
							label: option.value,
							webLabel: option.children,
						})
					);
					if (isExamingMicroPhone) {
						getDeviceStream(DEVICE_TYPE.AUDIO_DEVICE).then((stream) => {
							connectStream(stream).then(() => {
								globalMessage.success('完成音频设备连接');
								setIsSoundMeterConnecting(false);
							});
							const examMicroPhoneDOM = examMicroPhoneRef.current as HTMLAudioElement;
							examMicroPhoneDOM.pause();
							examMicroPhoneDOM.srcObject = stream;
							examMicroPhoneDOM.play();
						});
					}
				}}
				value={props.usingAudioDevice}>
				{props.audioDevices.map((device) => (
					<Select.Option value={device.label} key={device.deviceId}>
						{device.webLabel}
					</Select.Option>
				))}
			</Select>
			<div style={{ marginTop: '0.25rem', display: 'flex' }}>
				<div style={{ height: '1.2rem' }}>
					<Button
						style={{ width: '7em' }}
						onClick={() => {
							if (!isExamingMicroPhone) setIsSoundMeterConnecting(true);
							setIsExamingMicroPhone(!isExamingMicroPhone);
						}}
						loading={isSoundMeterConnecting}>
						{isExamingMicroPhone ? '停止检查' : '检查麦克风'}
					</Button>
				</div>
				<div style={{ width: '50%', margin: '0.25rem' }}>
					<Progress
						percent={volume}
						showInfo={false}
						strokeColor={
							isExamingMicroPhone ? (volume > 70 ? '#e91013' : '#108ee9') : 'gray'
						}
						size='small'
					/>
				</div>
				<audio ref={examMicroPhoneRef} />
			</div>
			<div style={{ display: 'flex', marginTop: '0.5em' }}>
				<div style={{ fontWeight: 'bold' }}>音频选项:</div>
				<div
					style={{
						display: 'flex',
						justifyContent: 'center',
					}}>
					<Checkbox
						checked={noiseSuppression}
						onChange={(evt) => {
							setNoiseSuppression(evt.target.checked);
							localStorage.setItem('noiseSuppression', `${evt.target.checked}`);
						}}>
						噪音抑制
					</Checkbox>
					<Checkbox
						checked={echoCancellation}
						onChange={(evt) => {
							setEchoCancellation(evt.target.checked);
							localStorage.setItem('echoCancellation', `${evt.target.checked}`);
						}}>
						回声消除
					</Checkbox>
				</div>
			</div>
		</div>
	);
}

Besides switching the microphone under test and monitoring its volume, the module also lets users choose whether to enable noise suppression and echo cancellation for connections.
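The two checkboxes only persist flags to localStorage; presumably getDeviceStream reads them back when building the getUserMedia audio constraints. A hypothetical sketch of that mapping follows; the function, its defaults, and the local result type are my assumptions, not the project's actual code:

```typescript
// Hypothetical sketch: turn the persisted flags into getUserMedia-style audio
// constraints. getDeviceStream in the real project may do this differently.
interface AudioConstraintsSketch {
	deviceId: string;
	noiseSuppression: boolean;
	echoCancellation: boolean;
}

function buildAudioConstraints(
	stored: { [key: string]: string | null },
	deviceId: string
): AudioConstraintsSketch {
	return {
		deviceId,
		// Both flags default to enabled: anything other than the string 'false' counts as true,
		// matching the `localStorage.getItem(...) !== 'false'` checks in the component above
		noiseSuppression: stored['noiseSuppression'] !== 'false',
		echoCancellation: stored['echoCancellation'] !== 'false',
	};
}
```

Note the "default to true" convention: a missing key (null) enables the feature, which matches how the checkboxes initialize their state.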

Meeting status

The meeting status module is relatively simple: it only maintains whether the microphone and camera are turned on by default when the user joins a meeting. The code is as follows:

import { Checkbox } from 'antd';
import React, { useState } from 'react';

export default function MeetingStatus() {
	const [autoOpenMicroPhone, setAutoOpenMicroPhone] = useState(
		localStorage.getItem('autoOpenMicroPhone') === 'true'
	);
	const [autoOpenCamera, setAutoOpenCamera] = useState(
		localStorage.getItem('autoOpenCamera') === 'true'
	);

	return (
		<>
			<Checkbox
				checked={autoOpenMicroPhone}
				onChange={(e) => {
					setAutoOpenMicroPhone(e.target.checked);
					localStorage.setItem('autoOpenMicroPhone', `${e.target.checked}`);
				}}>
				与会时打开麦克风
			</Checkbox>
			<Checkbox
				checked={autoOpenCamera}
				onChange={(e) => {
					setAutoOpenCamera(e.target.checked);
					localStorage.setItem('autoOpenCamera', `${e.target.checked}`);
				}}>
				与会时打开摄像头
			</Checkbox>
		</>
	);
}

About

The last module displays information about the application. Its core feature is checking whether the application needs an update. To do that, I first wrote a simple function to compare version numbers:

function needUpdate(nowVersion: string, targetVersion: string) {
	const nowArr = nowVersion.split('.').map((i) => Number(i));
	const newArr = targetVersion.split('.').map((i) => Number(i));
	const lessLength = Math.min(nowArr.length, newArr.length);
	for (let i = 0; i < lessLength; i++) {
		if (nowArr[i] < newArr[i]) {
			return true;
		} else if (nowArr[i] > newArr[i]) {
			return false;
		}
	}
	if (nowArr.length < newArr.length) return true;
	return false;
}
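A few sample comparisons show the intended semantics. The version numbers below are illustrative, and the function is reproduced so the examples are self-contained:

```typescript
// The comparison function from above, reproduced so the examples run standalone.
function needUpdate(nowVersion: string, targetVersion: string): boolean {
	const nowArr = nowVersion.split('.').map((i) => Number(i));
	const newArr = targetVersion.split('.').map((i) => Number(i));
	const lessLength = Math.min(nowArr.length, newArr.length);
	for (let i = 0; i < lessLength; i++) {
		if (nowArr[i] < newArr[i]) return true;
		else if (nowArr[i] > newArr[i]) return false;
	}
	// Equal prefix: a longer target version (e.g. 1.2 vs 1.2.1) counts as newer
	return nowArr.length < newArr.length;
}

// needUpdate('1.2.3', '1.3.0') -> true   (minor bump)
// needUpdate('1.3.0', '1.2.9') -> false  (current version is already ahead)
// needUpdate('1.2', '1.2.1')   -> true   (longer target wins on an equal prefix)
```

The segments are compared numerically left to right, so '1.10.0' correctly ranks above '1.9.0' even though it sorts lower as a string.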

The entire code of the About module is as follows:

import { Button, Image, Progress } from 'antd';
import axios from 'axios';
import { globalMessage } from 'Components/GlobalMessage/GlobalMessage';
import React, { useEffect, useMemo, useState } from 'react';
import { eWindow } from 'Utils/Types';
import './style.scss';

function needUpdate(nowVersion: string, targetVersion: string) {
	const nowArr = nowVersion.split('.').map((i) => Number(i));
	const newArr = targetVersion.split('.').map((i) => Number(i));
	const lessLength = Math.min(nowArr.length, newArr.length);
	for (let i = 0; i < lessLength; i++) {
		if (nowArr[i] < newArr[i]) {
			return true;
		} else if (nowArr[i] > newArr[i]) {
			return false;
		}
	}
	if (nowArr.length < newArr.length) return true;
	return false;
}

export default function About() {
	const [appVersion, setAppVersion] = useState<string | undefined>(undefined);
	useEffect(() => {
		eWindow.ipc.invoke('APP_VERSION').then((version: string) => {
			setAppVersion(version);
		});
	}, []);

	const thisYear = useMemo(() => new Date().getFullYear(), []);

	const [latestVersion, setLatestVersion] = useState<string | false>(false);
	const [checking, setChecking] = useState(false);
	const checkForUpdate = () => {
		setChecking(true);
		axios
			.get('https://assets.aiolia.top/ElectronApps/SduMeeting/manifest.json', {
				headers: {
					'Cache-Control': 'no-cache',
				},
			})
			.then((res) => {
				const { latest } = res.data;
				if (needUpdate(appVersion as string, latest)) setLatestVersion(latest);
				else globalMessage.success({ content: '当前已是最新版本,无需更新' });
			})
			.catch(() => {
				globalMessage.error({
					content: '检查更新失败',
				});
			})
			.finally(() => {
				setChecking(false);
			});
	};

	const [total, setTotal] = useState(Infinity);
	const [loaded, setLoaded] = useState(0);
	const [updating, setUpdating] = useState(false);
	const update = () => {
		setUpdating(true);
		axios
			.get(`https://assets.aiolia.top/ElectronApps/SduMeeting/${latestVersion}/update.zip`, {
				responseType: 'blob',
				onDownloadProgress: (evt) => {
					const { loaded, total } = evt;
					setTotal(total);
					setLoaded(loaded);
				},
				headers: {
					'Cache-Control': 'no-cache',
				},
			})
			.then((res) => {
				const fr = new FileReader();
				fr.onload = () => {
					eWindow.ipc.invoke('DOWNLOADED_UPDATE_ZIP', fr.result).then(() => {
						setTimeout(() => {
							eWindow.ipc.send('READY_TO_UPDATE');
						}, 500);
					});
				};
				fr.readAsBinaryString(res.data);
				globalMessage.success({ content: '更新包下载完毕,即将重启应用...' });
			});
	};

	return (
		<div id='settingAboutContainer'>
			<div>
				<Image
					src={'../electronAssets/favicon177x128.ico'}
					preview={false}
					width={'25%'}
					height={'25%'}
				/>
			</div>
			<div className='settingAboutFaviconText'>山大会议</div>
			<div className='settingAboutFaviconText'>SDU Meeting</div>
			<div id='settingVersionText'>V {appVersion}</div>
			{latestVersion ? (
				<>
					<div>检查到有新的可用版本:V {latestVersion},是否进行更新?</div>
					{updating ? (
						<Progress
							percent={Number(((loaded / total) * 100).toFixed(0))}
							status={loaded === total ? 'success' : 'active'}
						/>
					) : (
						<Button onClick={update}>开始下载</Button>
					)}
				</>
			) : (
				<Button type='primary' onClick={checkForUpdate} loading={checking}>
					检查更新
				</Button>
			)}
			<div id='copyright'>Copyright (c) 2021{thisYear ? ` - ${thisYear}` : ''} 德布罗煜</div>
		</div>
	);
}

(Figure: settings-about)
When the application detects a new version, it downloads the latest update package as a Blob. After the download completes, the update package is saved to a specific location by a handler I wrote in Electron:

const ipc = require('electron').ipcMain;
const path = require('path');
const fs = require('fs-extra');

// EXEPATH 为应用可执行文件所在目录,在主进程其他位置定义
ipc.handle('DOWNLOADED_UPDATE_ZIP', (evt, data) => {
	fs.writeFileSync(path.join(EXEPATH, 'resources', 'update.zip'), data, 'binary');
	return true;
});

Since some of the files that the update package must replace are locked while the application is running, I wrote another function in Electron that spawns a child process detached from the SDU Meeting application itself. After SDU Meeting closes, that child process, an update (unzip) program I wrote in C++, extracts the update package over the old files, completing the update.

// electron 中的更新进程
// EXEPATH 与 mainWindow 在主进程其他位置定义
const { app } = require('electron');
const cp = require('child_process');
const path = require('path');

function readyToUpdate() {
	const { spawn } = cp;
	// 启动独立于应用本体的更新子进程
	const child = spawn(
		path.join(EXEPATH, 'resources/ReadyUpdater.exe'),
		['YES_I_WANNA_UPDATE_ASAR'],
		{
			detached: true,
			shell: true,
		}
	);
	if (mainWindow) mainWindow.close();
	child.unref();
	app.quit();
}

// ReadyUpdater.cpp
#include <iostream>
#include <stdlib.h>
#include <tchar.h>
#include <Windows.h>
#include "unzip.h"
using namespace std;

int main(int argc, char* argv[])
{
	Sleep(300);
	if (argc < 2) {
		cout << "您正以不当方式运行该程序" << endl;
	}
	else {
		char* safetyKey = argv[1];
		if (strcmp("YES_I_WANNA_UPDATE_ASAR", safetyKey) != 0) {
			cout << "你不应当执行该程序" << endl;
		}
		else {
			HZIP hz = OpenZip(_T(".\\resources\\update.zip"), 0);
			SetUnzipBaseDir(hz, _T(".\\resources"));
			ZIPENTRY ze;
			GetZipItem(hz, -1, &ze); // -1 gives overall information about the zipfile
			int numitems = ze.index;
			for (int zi = 0; zi < numitems; zi++)
			{
				ZIPENTRY entry;
				GetZipItem(hz, zi, &entry); // fetch individual details
				UnzipItem(hz, zi, entry.name); // extract by the item's name
			}
			CloseZip(hz);
			system("del .\\resources\\update.zip");
			cout << "更新完成" << endl;
			cout << "请重启应用" << endl;
		}
	}
	system("pause");
	return 0;
}

Origin blog.csdn.net/qq_53126706/article/details/125110713