2023 Google Developers Conference Records

Foreword:

In 2023 the Google Developers Conference was once again held in Shanghai, and this was the third time I had applied for tickets to attend.

First, a check-in photo at the entrance.

1. Keynote Speech

The keynote was split into seven parts, each with a different speaker.

Part One: Opening (Chen Junting, President of Google Greater China)

He first set the stage for the later talks, covering themes such as AI unlocking new points of business growth and a series of talent-training programs. He also introduced some of Google's work in China, such as the Huiyu China early-education program, which supports education in impoverished areas, and Google's philanthropy efforts.

Google Arts & Culture

Part Two: Integrating AI into Work (Jeanine Banks)

Part Three: Mobile (Shi Jingyu)

This was still only a high-level overview; each topic was covered in detail in the afternoon sessions.

Topics touched on:

- Some features of Android
- Mobile applications running on PC
- Some new support libraries
- Development efficiency
- An introduction to Flutter
- Android Studio code generation from an AI chat
- How to build for multiple screens
- Compose on both phones and TVs

Part Four: Web (Paul Kinlan)

An introduction to the web platform, such as Baseline (cross-browser feature support) and some new Chrome capabilities.

Part Five: AI (Jason Mayes)

Parts of this talk overlapped with earlier content, but he gave an example of playing games through facial recognition that is worth describing in a little detail.

A game enthusiast lost the use of his hands in a fire and could no longer play with them. GameFace solves this by mapping facial expressions to game controls, so he can happily play again.

Part Six: Google Cloud (Liu Ting)

Enterprise use of AI falls into two areas. One is AI-assisted search: Vertex AI brings AI-powered search to data inside the enterprise.

The other is generative AI, similar to GPT, which solves problems with us through question-and-answer chat; that is PaLM.

Part Seven: Developer Community (David)

2. Frontiers of Mobile Development Technology

This chapter was presented mainly by Shi Jingyu, head of Google's developer advocacy team, and Gao Hanrui, a Google developer relations engineer. It was essentially a refinement of the keynote; more detailed sessions followed in the afternoon.

Build apps for a multi-screen world

Jetpack WindowManager and WindowSizeClass
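The multi-screen advice centers on choosing a layout from the current window size class. A minimal Compose sketch, assuming the material3 window-size-class artifact (which may require an opt-in annotation depending on version); the per-form-factor layout composables are hypothetical placeholders:

```kotlin
import androidx.activity.ComponentActivity
import androidx.compose.material3.windowsizeclass.WindowWidthSizeClass
import androidx.compose.material3.windowsizeclass.calculateWindowSizeClass
import androidx.compose.runtime.Composable

// Hypothetical per-form-factor layouts.
@Composable fun PhoneLayout() { /* single-pane UI */ }
@Composable fun FoldableLayout() { /* two-pane UI */ }
@Composable fun TabletLayout() { /* multi-pane UI */ }

@Composable
fun AdaptiveScreen(activity: ComponentActivity) {
    when (calculateWindowSizeClass(activity).widthSizeClass) {
        WindowWidthSizeClass.Compact -> PhoneLayout()    // most phones in portrait
        WindowWidthSizeClass.Medium -> FoldableLayout()  // foldables, small tablets
        else -> TabletLayout()                           // expanded widths
    }
}
```

The point of WindowSizeClass is that the app branches on window size rather than device type, so the same code handles phones, foldables, tablets, and resizable desktop windows.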

Platform and application quality

Camera HDR is now open to applications. Previously only the native camera could take clear night-mode shots; now third-party apps can support this too.

Unified management of sign-in credentials, and the Health Connect feature.

Development efficiency

An introduction to the Glance library.

Compose currently supports TVs and watches, and may be extended to cars in the future.

Kotlin was not covered in depth, apart from a mention of overall performance improvements, mainly around coroutines.

Design guidelines and the Material Design website.

3. On-Device Machine Learning

The third chapter mainly introduces MediaPipe, Google's on-device machine learning framework.

Easy on-device ML with MediaPipe (Wang Lu)

Generally speaking, on-device machine learning is genuinely cumbersome to set up; MediaPipe solves this for us, offering a low-cost route to on-device ML.

An ordinary on-device ML pipeline has to be broken into four steps to complete a recognition task; with MediaPipe, the same thing takes just a few lines of code.
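As an illustration of "a few lines of code", here is a sketch of MediaPipe Tasks object detection on Android, based on my understanding of the com.google.mediapipe:tasks-vision artifact (verify the class names against the current docs; the model asset path is hypothetical):

```kotlin
import android.content.Context
import com.google.mediapipe.framework.image.MPImage
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.vision.objectdetector.ObjectDetector
import com.google.mediapipe.tasks.vision.objectdetector.ObjectDetectorResult

// Preprocessing, inference, and postprocessing are all hidden behind one call.
fun detectObjects(context: Context, image: MPImage): ObjectDetectorResult {
    val options = ObjectDetector.ObjectDetectorOptions.builder()
        .setBaseOptions(BaseOptions.builder().setModelAssetPath("model.tflite").build())
        .setMaxResults(3)
        .build()
    val detector = ObjectDetector.createFromOptions(context, options)
    return detector.detect(image)
}
```

Compared with wiring up a TFLite interpreter by hand, the task API bundles the model loading, tensor conversion, and result parsing steps into the options builder and a single detect call.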

He also showed how to use it in web pages.

He also introduced MediaPipe Studio, a development tool for on-device ML.

Flutter with machine learning

Mainly an introduction to generating images and text in Flutter.

That wrapped up the morning. After lunch, on to the afternoon sessions:

4. Taking Application Quality to New Heights

The main speaker for this section was Lin Chufeng, a Google developer relations engineer.

I had originally expected the conference to introduce the new features of Android 14, but it was genuinely disappointing: very few Android 14 features were covered. It felt as though Android 14 simply was not ready, which is why it had not shipped.

Since the topic is application quality, the talk naturally covered not just native development but also Flutter and Compose.

Flutter came first. Its advantage is that the framework smooths over platform differences, so a single Flutter codebase works across all platforms.

Then came a preliminary introduction to Flutter's basic building blocks and the underlying implementation principles of the framework:

The next part introduced native quality support, divided into six parts:

1. HDR video/photo support: when using the camera, apps can now also use HDR to enhance the displayed image. This was effectively a supplement to the morning content.

2. Advanced camera features, again extending camera support.

3. Video editing, as shown on the slide; not expanded here.

4. High-quality audio. Likewise not expanded.

5. On-device machine learning, which again mainly meant MediaPipe.

6. Large screen devices.

For camera functionality, the CameraX and CameraViewfinder libraries can help with development.
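A minimal CameraX preview sketch, assuming the androidx.camera artifacts (permission checks and error handling omitted):

```kotlin
import androidx.activity.ComponentActivity
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat

fun startPreview(activity: ComponentActivity, previewView: PreviewView) {
    val providerFuture = ProcessCameraProvider.getInstance(activity)
    providerFuture.addListener({
        val provider = providerFuture.get()
        val preview = Preview.Builder().build().apply {
            setSurfaceProvider(previewView.surfaceProvider)
        }
        // Bind the preview use case to the activity's lifecycle; CameraX
        // handles opening, closing, and rotating the camera for us.
        provider.unbindAll()
        provider.bindToLifecycle(activity, CameraSelector.DEFAULT_BACK_CAMERA, preview)
    }, ContextCompat.getMainExecutor(activity))
}
```

The lifecycle binding is the main appeal: the camera is released automatically when the activity stops, which removes a whole class of Camera2 boilerplate.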

Baseline Profiles were recommended again. This is not a new thing; it has been around for a long time. The principle, simply put: the developer declares in advance the code paths users are most likely to run most often, and those paths are compiled to machine code ahead of time, so executing them AOT is naturally faster than the JIT approach. Beyond this core function, certain DEX optimizations have also been added, but those require a newer version of Gradle to use.
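For reference, a Baseline Profile is just a text file of rules shipped with the app; the runtime pre-compiles the listed methods so hot paths skip JIT. An illustrative sketch with hypothetical class names (in practice the file is usually generated by a Macrobenchmark run rather than written by hand):

```
# baseline-prof.txt rules (H = hot, S = used at startup, P = used post-startup)
HSPLcom/example/app/MainActivity;->onCreate(Landroid/os/Bundle;)V
HSPLcom/example/app/FeedRepository;->loadFeed()Ljava/util/List;
```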

Next came some features exclusive to Android 14. A new short-service foreground service type was added, which lets an application perform some brief work, such as finishing an operation before exiting.

To use a short service, it is configured as follows:
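Based on the public Android 14 documentation, the short-service type is declared when promoting the service to the foreground, and the system calls onTimeout when the brief time window expires. A sketch (the service and notification helper are hypothetical):

```kotlin
import android.app.Notification
import android.app.Service
import android.content.Intent
import android.content.pm.ServiceInfo
import android.os.IBinder

class CleanupService : Service() {

    override fun onStartCommand(intent: Intent?, flags: Int, startId: Int): Int {
        // Declare the short-service type; the system grants only a brief window
        // (on the order of a few minutes) before onTimeout() fires.
        startForeground(
            NOTIFICATION_ID,
            buildNotification(),
            ServiceInfo.FOREGROUND_SERVICE_TYPE_SHORT_SERVICE
        )
        doBriefCleanup()  // hypothetical short-lived work
        return START_NOT_STICKY
    }

    override fun onTimeout(startId: Int) {
        stopSelf()  // must stop promptly once the time limit is reached
    }

    override fun onBind(intent: Intent?): IBinder? = null

    private fun buildNotification(): Notification = TODO("app-specific notification")
    private fun doBriefCleanup() { /* app-specific */ }

    companion object { private const val NOTIFICATION_ID = 1 }
}
```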

On technical quality, ADPF (the Android Dynamic Performance Framework) was also introduced, but only briefly; I didn't fully follow it and plan to study it in depth when I have time.

Next came some mandatory requirements. Some chips no longer support 32-bit, so models equipped with those chips naturally cannot run 32-bit code. As a result, in the near future we will be able to package only 64-bit native libraries (.so files) into our APKs.
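If a build needs to drop 32-bit native libraries, the usual Gradle knob is an ABI filter; a module-level build.gradle.kts sketch:

```kotlin
// Package only 64-bit native libraries into the APK.
android {
    defaultConfig {
        ndk {
            abiFilters += listOf("arm64-v8a", "x86_64")
        }
    }
}
```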

The second mandatory requirement: on Android 14 devices, apps with a targetSDK lower than 23 (Android 6.0) cannot be installed at all, and those below 28 (Android 9.0) will also trigger a warning.

The next part was privacy on Android. I felt it had little relevance to our day-to-day development, so I won't detail it.

5. Improving Development Efficiency

When it comes to development efficiency, Google naturally recommends Compose, the framework meant to replace native View-based page development. Its concise syntax and natural binding between data and views greatly reduce boilerplate, improving efficiency. This time, Compose has gained some new feature support.
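The conciseness claim is easiest to see in a minimal Compose sketch: state changes drive recomposition, with no XML layout or findViewById in sight.

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

@Composable
fun Counter() {
    // State object; writing to it schedules recomposition of anything reading it.
    var count by remember { mutableStateOf(0) }
    Column {
        Text("Clicked $count times")
        Button(onClick = { count++ }) { Text("Click me") }
    }
}
```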

Android 14 adds support for Java 17, which should bring in some newer language features and runtime optimizations.

A series of efficiency-improving practices was then introduced.

The first is tooling, mainly new features in Android Studio,

along with Studio Bot, an AI-based code-generation tool.

The second part is Compose design specifications

Google has defined many design specifications for Compose. Following them keeps styling consistent, and adopting such a specification library also saves part of the development workload.

The third part is accessibility

This part is genuinely user-friendly. Many application developers simply ignore the experience of users with disabilities, which is a real shortcoming. Google provides a series of solutions to help us build and optimize apps for accessibility.

The fourth part is privacy and security

This section mainly introduces Checks, a platform for privacy compliance checking.

6. Playing with the Multi-Device Experience

Mobile devices

It mainly introduces how to improve the user experience of some large-screen devices.

We were shown examples of three tiers of experience.

Beyond large screens, there is also the problem of state loss when switching screens; onSaveInstanceState and similar mechanisms should be used to solve it.
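A minimal sketch of the onSaveInstanceState approach mentioned here (the saved key and field are illustrative; rememberSaveable or a ViewModel are the Compose-era equivalents):

```kotlin
import android.os.Bundle
import androidx.activity.ComponentActivity

class SearchActivity : ComponentActivity() {
    private var query: String = ""  // transient UI state worth preserving

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Restore state after a configuration change (e.g. fold/unfold, rotation).
        query = savedInstanceState?.getString(KEY_QUERY) ?: ""
    }

    override fun onSaveInstanceState(outState: Bundle) {
        super.onSaveInstanceState(outState)
        outState.putString(KEY_QUERY, query)
    }

    companion object { private const val KEY_QUERY = "query" }
}
```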

There is also the problem of resource contention when multiple apps share the screen at the same time.

In the large-screen state, some activities remain visible after onPause; they merely lose focus. So resources must not be released at that point.

Compose for TV

This section repeated the morning session, only in more detail; in practice there was not much difference.

Multi-screen design for cars was also covered, but only briefly.

7. Android Fireside Chat

This Android fireside chat did not exist at the last developer conference, and it is the part I found most interesting. It gives ordinary developers a chance to communicate with Google directly, to raise their own ideas or ask technical questions.

Of course, since the participants were Google developer relations engineers and product staff rather than front-line engineers, some answers may not be entirely accurate.

Around a dozen questions were raised in total. I've collected some of them here along with Google's responses.

1. How can chained app launches be restricted?

This question was mine. With some applications, such as UC Browser, an accidental touch or a shake of the phone jumps you to a third-party app's advertising page, which makes for a terrible user experience. I originally wanted to ask whether this could be solved through permission control, for example by adding a permission governing jumps to third-party applications. The reply I got was that Google prefers to solve it through the app store: an app that redirects like this without a reasonable justification is simply not allowed onto the store.

2. Can Google do anything to reduce package size?

The package size issue actually has two parts: the APK installation package itself, and the storage occupied by files the app keeps writing after installation.

In my own view this has little to do with Android: it is business logic the application should handle and optimize itself. Google's answer was to store temporary files in temporary folders, which are cleaned up periodically.

3. What are your views on app hardening?

Since the respondent was a developer relations engineer, the answer was actually about code obfuscation, which somewhat missed the point: hardening is really about preventing APK decompilation.

4. Will apps also be able to support PC in the future?

As mentioned above, some mobile games can already run on PC through the Google store; for ordinary apps, this is still not supported.

5. Why is Gradle not backward compatible after upgrades?

Gradle is a build tool, but Google's upgrades to it have effectively ignored compatibility, so it is not backward compatible. Google has indeed under-invested here; the official answer was to file an issue for feedback.

6. What is the state of the Rust ecosystem on Android?

It is being worked on, but the timeline is hard to say.

Finally, if you have questions or good suggestions for Google, you can send feedback through https://issuetracker.google.com/.

8. Developer interactive experience

The scale of this year's conference was clearly smaller than last year's, and the interactive developer activities were extremely simple; it took barely ten minutes to see everything. The only thing that left a deep impression on me was the "Wonderful Painting" exhibit, so I went in to experience it for a while.

9. Summary

Finally, a summary of the experience, split into advantages and disadvantages. I don't know whether anyone at Google will see this; I simply hope they get the chance to read it and understand how ordinary developers experienced the event.

Advantages:

1. Many more dialogue sessions were added, which is genuinely good.

Disadvantages:

1. The structure of the talks was confusing. For example, Android 14's new features were scattered across the quality and efficiency topics, losing coherence; and much of the content filed under quality actually had little to do with quality.

2. There was a lot of repetition. Large-screen adaptation, for example, was introduced several times at nearly the same depth.

3. Not enough hands-on experience. In the interactive area there were no Android 14 phones and no live demo apps for Compose or Flutter. Without an intuitive on-site demo, how can one claim that Compose is already well supported on TV, or that Flutter has completely smoothed over platform differences?


Origin: blog.csdn.net/AA5279AA/article/details/132897679