12. Web augmented reality

AR.js supports three methods of displaying AR content:
1) Image Tracking
2) Location Based AR
3) Marker Tracking
        Marker Tracking is the most common marker-based AR display method. Image Tracking, as the name suggests, displays AR content based on a picture; the principle is similar to Marker Tracking in that the picture is identified and tracked by its feature points. The following example is based on Image Tracking. AR.js integrates two frameworks, A-Frame and three.js, and Image Tracking can be implemented with either of them.
        Image requirements: there are certain requirements for the images used for Image Tracking. In principle, the more detailed the image, the better. Images of 300 dpi and above are recommended; 72 dpi images are barely usable, and then the AR device must be held very close to the picture and kept still.
        Generate the Image Descriptors corresponding to the image. AR.js officially provides a web tool that converts an image into Image Descriptors; AR.js actually performs image recognition and tracking based on the generated descriptors, not on the image itself. The Image Descriptors consist of three files, with the suffixes .fset, .fset3, and .iset. If the generated files are named demo.fset, demo.fset3, and demo.iset, then the name of your Image Descriptors is demo (the suffix removed).
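For example, if the generated files are demo.fset, demo.fset3, and demo.iset placed under an nft/ directory, the A-Frame entity references them by the common prefix only, and AR.js appends the three suffixes itself when loading (a minimal sketch; nft/demo is the hypothetical path):

<a-nft type="nft" url="nft/demo">
  <!-- AR content anchored to the tracked image goes here -->
</a-nft>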


1. Use nginx to build a static HTTPS server
Browsers only allow camera access (getUserMedia) in a secure context, so the site must be served over HTTPS.
1) Download mkcert and run the following commands:
mkcert -install
mkcert localhost 127.0.0.1 www.myhost.com 172.20.10.4
Copy the two generated files to the keys directory under the project root. (mkcert names the files after the first host plus the number of additional names, so the command above produces localhost+3.pem and localhost+3-key.pem; the nginx.conf below references localhost+1.pem, so adjust the paths to match the files you actually generated.)
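A minimal sketch of the copy step (assuming the project root is D:\workspace\ai\arjs, as in the nginx config below, and the file names produced by the mkcert command above):

copy localhost+3.pem  D:\workspace\ai\arjs\keys\
copy localhost+3-key.pem  D:\workspace\ai\arjs\keys\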
2) Download nginx
After downloading, switch to the nginx directory, edit nginx.conf as below, and start nginx:
cd /d D:\sdk\nginx1.15\
nginx.conf:
    server {
        listen      443 ssl;
        server_name 172.20.10.4;

        ssl_certificate      d:/sdk/localhost+1.pem;
        ssl_certificate_key  d:/sdk/localhost+1-key.pem;

        location / {
            root  D:/workspace/ai/arjs;
            index index.html index.htm;
        }
    }

Start nginx with this configuration:
nginx -c conf\nginx.conf

If port 443 is already in use, find the occupying process and kill it (18500 below is the PID reported by netstat):
netstat -aon | findstr "443"
taskkill /f /pid 18500

Access test: https://172.20.10.4/

2. Download the model
Download any free model in glTF format from https://sketchfab.com/Sketchfab/models.

3. Write the application
Open-source software: https://github.com/AR-js-org/AR.js
On the repository page, click Tags and download the JS build files from the release below, placing them in the D:/workspace/ai/arjs directory:
https://github.com/AR-js-org/AR.js/releases/tag/3.4.5
Write index.html under D:/workspace/ai/arjs. Note that the nft/mynft descriptors and model/scene.gltf referenced below must exist at those relative paths, or they will not load. Note also that gestures.js (which provides the gesture-detector and gesture-handler components used below) is not part of the AR.js release bundle; it comes from a separate AR.js gestures example project.
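For reference, one possible layout of the directory served by nginx, assuming the descriptors were named mynft (the keys directory and its contents are illustrative):

D:/workspace/ai/arjs/
├── index.html
├── aframe-master.min.js
├── aframe-ar-nft.js
├── gestures.js
├── keys/
├── nft/
│   ├── mynft.fset
│   ├── mynft.fset3
│   └── mynft.iset
└── model/
    └── scene.gltf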

<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
  <!-- import A-Frame and then AR.js with image tracking / location-based features -->
  <script src="aframe-master.min.js"></script>
  <script src="aframe-ar-nft.js"></script>
  <script src="gestures.js"></script>
</head>

<!-- style for the loader -->
<style>
  .arjs-loader {
    height: 100%;
    width: 100%;
    position: absolute;
    top: 0;
    left: 0;
    background-color: rgba(0, 0, 0, 0.8);
    z-index: 9999;
    display: flex;
    justify-content: center;
    align-items: center;
  }

  .arjs-loader div {
    text-align: center;
    font-size: 1.25em;
    color: white;
  }
</style>

<body style="margin: 0px; overflow: hidden;">
  <!-- minimal loader shown until the image descriptors are loaded; loading may take a while depending on the device's computational power -->
  <div class="arjs-loader">
    <div>Loading the AR model, please wait...</div>
  </div>

  <!-- a-frame scene -->
  <a-scene vr-mode-ui="enabled: false;" gesture-detector renderer="logarithmicDepthBuffer: true;" embedded
    arjs="trackingMethod: best; sourceType: webcam;debugUIEnabled: false;">
    <a-nft type="nft" url="nft/mynft" smooth="true" smoothCount="10" smoothTolerance=".01"
      smoothThreshold="5">
      <!-- path of the model to display -->
      <a-entity gltf-model="model/scene.gltf" scale="50 50 50"
        gesture-handler="minScale: 0.25; maxScale: 10" position="100 0 -200" rotation="-90 0 0">
      </a-entity>
    </a-nft>
    <!-- static camera that moves according to the device movements -->
    <a-entity camera="fov: 190"></a-entity>
  </a-scene>
</body>
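Nothing in the markup above ever hides the .arjs-loader overlay. A minimal sketch of one way to remove it (placed just before </body>), assuming the markerFound event that AR.js emits for markers is also emitted on the <a-nft> entity once the image is first recognized:

<script>
  // Hide the loading overlay the first time the tracked image is found.
  window.addEventListener('load', function () {
    var nft = document.querySelector('a-nft');
    var loader = document.querySelector('.arjs-loader');
    if (nft && loader) {
      nft.addEventListener('markerFound', function () {
        loader.style.display = 'none';
      });
    }
  });
</script>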

4. Mobile phone access test.
    Make sure that the mobile phone and the computer are on the same Wi-Fi network.
    Use ipconfig to check the Windows machine's IP address; several addresses may be listed, so be careful to pick the right one (see the filter below).
    Then open https://172.20.10.4/ in the phone's browser.
    The effect is the same as accessing it on the computer, and the URL is the same.
    Now point the phone's camera at the chosen picture, and the 3D model will be displayed above it.
    Note: if you are using an iPhone and the camera does not turn on after the page loads, do not use Safari; download the Chrome browser instead.
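If several adapters are listed, a quick way to filter the relevant lines (assuming an English-language Windows, where the lines are labeled "IPv4 Address"):

ipconfig | findstr "IPv4"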

The effect is as follows (screenshot omitted): the 3D model is rendered on top of the tracked picture in the live camera view.

Origin: https://blog.csdn.net/vandh/article/details/131710142