Quickly build a Web AR application

AR stands for augmented reality: superimposing additional information on images of the real world to enrich what the viewer sees. One of the most famous AR applications is Pokémon GO, in which players use their phones to catch Pokémon in the real world.

Generally, an AR application decides when and where to present AR content by processing real-world input in different ways, for example by recognizing images or by using GPS location information. Achieving a good AR effect places certain requirements on the phone's hardware, because the device needs to be calibrated so that it can accurately identify object surfaces and place AR models on them. On Android, for example, only phones that have passed Google's ARCore certification provide reliable AR capabilities.

There are also many tools for building AR applications, such as Unity's AR Foundation, which integrates Google's ARCore and Apple's ARKit and makes it easy to build cross-platform AR apps. I tried creating an AR app with Unity, but found that I needed a suitable AR-capable phone to test the result. Although both of my Xiaomi phones are Google-certified models, the difference in tracking quality between them was too large, so I gave up on Unity. After searching online, I found there is a more convenient way to quickly build AR applications for mobile phones: Web AR. Many development frameworks support this approach, such as AR.js and WebAR.rocks; I chose AR.js.

AR.js integrates several frameworks, including ARToolkit, A-Frame, and Three.js. ARToolkit provides the AR engine functionality, such as detecting images and measuring the distance between the camera and objects, while A-Frame and Three.js provide WebGL rendering. The introduction on the official website is concise and easy to understand; see the AR.js Documentation for details.

Next I will show how to build an AR application using AR.js. AR.js supports several AR triggering methods, including image tracking, marker recognition, and location-based (outdoor) positioning. Below I build an indoor AR application based on image tracking.

First, select several indoor locations where AR content will be placed and take a picture of each location. Note that the more complex the picture, the better, because we will then process the pictures to extract their features: a complex picture contains richer contours (straight lines, corners, arcs, and so on), which allows more accurate tracking. A higher-resolution picture is also better, so avoid letting the phone reduce the resolution when shooting. After that, upload each image to the NFT-Creator-Web site to extract its features. The site generates three descriptor files per image; save all of these generated files.
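As a sketch of what to expect from this step (the filenames here are hypothetical; the actual basename is whatever you name the marker), the NFT tool produces three descriptor files per image that share one basename, and the page later references that basename without any extension:

```python
# Hypothetical example: after processing an image named "location_1",
# the three generated descriptor files (.iset, .fset, .fset3 -- the
# ARToolkit NFT convention) are placed together under the web root.
from pathlib import Path

marker_dir = Path("nft/location")
marker_dir.mkdir(parents=True, exist_ok=True)
for ext in (".iset", ".fset", ".fset3"):
    (marker_dir / f"location_1{ext}").touch()

# The <a-nft> url attribute later references the shared basename,
# with no extension: url="nft/location/location_1"
print(sorted(p.name for p in marker_dir.iterdir()))
# -> ['location_1.fset', 'location_1.fset3', 'location_1.iset']
```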

Then we need the AR content itself. I found a good website with many beautiful free 3D models for download: Explore 3D Models - Sketchfab. Try to choose the glTF format when downloading a model, since that is also the format AR.js recommends.
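For reference, a Sketchfab glTF download usually unzips into a scene.gltf plus its binary geometry buffer and a textures folder. The sketch below (the model name is hypothetical) recreates that layout under the web root so the relative paths in the page resolve:

```python
# Hypothetical layout: a Sketchfab glTF download typically contains
# scene.gltf (the model), scene.bin (geometry buffers), and a textures/
# folder. Give each model its own directory under the web root.
from pathlib import Path

model_dir = Path("models/rocket")
(model_dir / "textures").mkdir(parents=True, exist_ok=True)
(model_dir / "scene.gltf").touch()
(model_dir / "scene.bin").touch()

# The page then loads it with gltf-model="/models/rocket/scene.gltf".
print(sorted(p.name for p in model_dir.iterdir()))
# -> ['scene.bin', 'scene.gltf', 'textures']
```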

Finally, create an HTML file on the web server with the following content:

<script src="https://cdn.jsdelivr.net/gh/aframevr/aframe@1c2407b26c61958baa93967b5412487cd94b290b/dist/aframe-master.min.js"></script>
<script src="https://raw.githack.com/AR-js-org/AR.js/master/aframe/build/aframe-ar-nft.js"></script>
<script src="https://raw.githack.com/fcor/arjs-gestures/master/dist/gestures.js"></script>
<style>
  .arjs-loader {
    height: 100%;
    width: 100%;
    position: absolute;
    top: 0;
    left: 0;
    background-color: rgba(0, 0, 0, 0.8);
    z-index: 9999;
    display: flex;
    justify-content: center;
    align-items: center;
  }

  .arjs-loader div {
    text-align: center;
    font-size: 1.25em;
    color: white;
  }
</style>

<body style="margin : 0px; overflow: hidden;">
  <!-- minimal loader shown until image descriptors are loaded -->
  <div class="arjs-loader">
    <div>Loading, please wait ...</div>
  </div>
  <a-scene
    vr-mode-ui="enabled: false;"
    renderer="logarithmicDepthBuffer: true; precision: medium;"
    embedded
    arjs="trackingMethod: best; sourceType: webcam;debugUIEnabled: false;"
    gesture-detector    
  >

    <!-- we use cors proxy to avoid cross-origin problems ATTENTION! you need to set up your server -->
    <a-nft
      type="nft"
      url="marker/marker"
      smooth="true"
      smoothCount="10"
      smoothTolerance=".01"
      smoothThreshold="5"
      raycaster="objects: .clickable"
      emitevents="true"
      cursor="fuse: false; rayOrigin: mouse;"
    >
      <a-entity
        gltf-model="/models/medieval/scene.gltf"
        scale="5 5 5"
        position="150 -100 -150"
        class="clickable"
        gesture-handler="minScale: 0.25; maxScale: 10"        
      >
      </a-entity>
    </a-nft>

    <a-nft
      type="nft"
      url="nft/location/location_1"
      smooth="true"
      smoothCount="10"
      smoothTolerance=".01"
      smoothThreshold="5"
      raycaster="objects: .clickable"
      emitevents="true"
      cursor="fuse: false; rayOrigin: mouse;"
    >
      <a-entity
        gltf-model="/models/rocket/scene.gltf"
        scale="1 1 1"
        position="50 -100 -50"
        class="clickable"
        gesture-handler="minScale: 0.25; maxScale: 10"
      >
      </a-entity>
    </a-nft>

    <a-entity camera></a-entity>
  </a-scene>

</body>

To briefly explain: the first three script tags load the required JS files, and the third one provides gesture recognition. The last two script URLs may be blocked in some regions and require a proxy to reach, so it is recommended to download them and serve them locally. The <a-scene> tag then creates the AR scene. Because this scene tracks two images, it contains two <a-nft> tags. In each <a-nft> we set the url attribute to the path of the image feature files we saved earlier (the shared basename, without a file extension), and in each <a-entity> we specify the path of the 3D model to load.

After starting the web server, open this HTML page to test the AR application. Note that the page must be served over HTTPS because it needs to open the camera: browsers only allow camera access from secure contexts, i.e. pages served over HTTPS (localhost is also treated as secure).
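A minimal way to serve the page over HTTPS for local testing is Python's built-in http.server wrapped in a TLS socket. This is only a sketch: it assumes you have already generated a self-signed certificate pair (for example with `openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem -days 30`), and the browser will show a warning for a self-signed certificate that you must click through.

```python
# Minimal HTTPS static file server sketch for local AR testing.
# Assumes cert.pem / key.pem already exist (self-signed is fine here).
import http.server
import ssl

def make_server(port=8443, certfile=None, keyfile=None):
    """Serve the current directory; wrap the socket in TLS if a cert is given."""
    httpd = http.server.HTTPServer(
        ("0.0.0.0", port), http.server.SimpleHTTPRequestHandler
    )
    if certfile and keyfile:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile, keyfile)
        httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    return httpd

# Usage (blocks until interrupted):
#   make_server(certfile="cert.pem", keyfile="key.pem").serve_forever()
```

Run it from the directory containing the HTML file, the nft feature files, and the models, then open https://<your-machine>:8443/ on the phone.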

Origin blog.csdn.net/gzroy/article/details/127470665