[Product Design] The technical reasons behind product designs that are forced to compromise

Product managers who are just getting started often hear seniors say they should understand some technology, but they don't understand why. This article shares several examples of product designs that were forced to compromise, in the hope of helping product managers without technical backgrounds understand why "product managers should understand technology" matters in product design. Let's take a look.

Product managers who are just getting started often hear from seniors or product "experts" on the Internet that they should know a little technology. When told they need to understand the server, they may feel they have to become a full-stack engineer before they can become a product manager.

The threshold for becoming a product manager keeps getting lower. On the one hand, there are more and more products on the market and homogeneity is increasingly serious, so product managers can sometimes produce decent interaction designs just by referencing existing ones, without being clear about why they should be designed that way. On the other hand, product managers are often told in technical reviews that their designs are not technology-friendly, and they simply adjust them according to the R&D engineers' requirements. As a result, even product managers who do not understand technology at all can get their jobs done.

The few small cases shared below are meant to help product managers without technical backgrounds understand why "product managers should understand technology" matters in product design.

When we use software products on a daily basis, we are actually interacting with server resources. The general process is as follows:
When we perform an operation (such as clicking a button) on the access carrier (a browser, app, or mini program), a request is sent to the server, its content determined by the specific operation. After receiving it, the server "responds" with the content corresponding to the request, and the access carrier then "renders" that response. That completes one resource interaction.

You can think of the server as a virtual merchant dealing in various commodities: the request is us telling the merchant which commodities we need, and the merchant supplies them according to our needs.

You can understand the above process like this: You tell (send a request) the merchant (server) that you need an apple (request information), and the merchant will give you the apple (response) after receiving the information.
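The request-response cycle above can be sketched in a few lines of code. This is a minimal illustration only: the "server" here is just a function, and names like `handle_request` are invented for this sketch, not any real framework's API.

```python
# A minimal sketch of the request-response cycle described above.
# The "server" is just a function that plays the merchant.

def handle_request(request: dict) -> dict:
    """The 'merchant': looks at what was asked for and responds."""
    catalog = {"apple": "one fresh apple", "pear": "one ripe pear"}
    item = request.get("item")
    if item in catalog:
        return {"status": 200, "body": catalog[item]}
    return {"status": 404, "body": "not found"}

# The access carrier (browser/app) sends a request...
response = handle_request({"item": "apple"})
# ...then "renders" whatever came back.
print(response["body"])
```

The real picture adds networking, serialization, and status codes, but the shape is the same: one request out, one response back, then a render.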

The following cases require you to understand the above principles first.

1. Pagination

Paging design can be seen everywhere in software products. A basic pager supports previous page, next page, and jumping to a specified page.

Paging itself is a compromise. With little data, paging is unnecessary; with a lot of data, paging alone is of limited use and users mostly fall back on search. Think about it: isn't "next page" the only pager control most people ever actually use?

Some information-flow pages adjust the paging function so there is only a single "next page" (load more) operation. Each click loads one more page of data, and previously loaded data stays on the page. This technique is called "lazy loading".

Later, for the user's convenience, "lazy loading" evolved a new form of interaction: when the list scrolls near the end of the page, the next page loads automatically, and the user no longer performs any explicit paging operation.

Even so, "lazy loading" is still page-turning in essence; it merely weakens the user's perception of the paging operation. Why not display all the content directly? What is the point of pagination?

Imagine buying 1 ton of apples from a merchant and asking them to deliver the whole ton to your warehouse in one go. The merchant needs time to pack, transport, and unload at the destination, and with so many apples those times add up to a very long wait. Worse, a problem at any link can mean the apples you receive are wrong: the merchant may not have enough packing workers to handle so many apples; the transportation may take so long that apples rot or get lost along the way; or, on arrival, the warehouse may turn out to be too small to hold them all.

Technically speaking, when you send a request for a large amount of data, the server may "time out" while processing it, the risk of "distortion" (data loss or corruption) increases during transmission, and the access carrier may freeze or even crash while rendering. Any of these can leave you unable to obtain the complete information.

Pagination can solve this problem.

First of all, after the server receives the request, it responds with only 1 page of data (how many records make up a page is determined by the paging logic). When the user wants more, they select the next page or jump to a specified page; after the user sends an instruction through the pager, the server returns the data for the corresponding page. This change from "full supply" to "on-demand supply" reduces the amount of data transmitted each time, which both improves the user experience in terms of waiting time and reduces the burden on the server.

Pagination is like telling the merchant you want 1 ton of apples, and the merchant agrees but delivers only 100 catties at a time. When you need more, you tell the merchant again and get another 100 catties, until the whole ton has been handed over. This reduces the burden on the merchant, shortens transportation time, lowers transportation risk, and leaves the warehouse enough room.

Many pagers now let users choose how many records to display per page. This is like being able to tell the merchant how many catties of apples to deliver each time, while the maximum amount of data per request is still kept within the range the system can handle stably. But at the end of the day, pagination has always been a product design that was forced to compromise.
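The "on-demand supply" idea can be sketched as a simple offset/limit slice. This is a toy sketch: `paginate` and `PAGE_SIZE` are illustrative names, not a specific framework's API, and a real back end would slice in the database query rather than in memory.

```python
# A minimal sketch of server-side pagination (offset/limit style).

PAGE_SIZE = 10  # "100 catties at a time"

def paginate(items: list, page: int, page_size: int = PAGE_SIZE) -> dict:
    """Return only the slice of data for the requested page."""
    total = len(items)
    start = (page - 1) * page_size
    return {
        "page": page,
        "total": total,
        "total_pages": (total + page_size - 1) // page_size,
        "items": items[start:start + page_size],  # on-demand supply
    }

apples = [f"apple-{i}" for i in range(1, 36)]  # 35 records in the "warehouse"
first = paginate(apples, page=1)
last = paginate(apples, page=4)
print(first["total_pages"], len(first["items"]), len(last["items"]))  # 4 10 5
```

The response also carries `total` and `total_pages` so the front end can draw the pager without fetching everything.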

2. Real-time search

Real-time search is not a design that is forced to compromise, on the contrary, it is a very user-friendly design with an excellent experience.
What is real-time search? As we provide search conditions, the system matches content against them in real time: as we type keywords, the system automatically queries for matching results.

However, this design is not seen everywhere in products. More often, system design has users enter keywords and then manually trigger the query. Why can't real-time search be popularized across product design? Why is it forced to compromise into a design like this?

First of all, we need to understand that each real-time search request may be a query with inaccurate or incomplete keywords, and the final result may only be reached after multiple rounds of queries. A manually triggered search, by contrast, fires only once the keywords are accurate enough, so the final result can be obtained in a single round of querying.

Back to the apple example: real-time query means you first tell the merchant you need an "apple", and the merchant responds immediately. But "apple" is too general: do you want an iPhone or an edible apple? The merchant doesn't know, so they can only give you everything that qualifies. You then add the keyword "Xinjiang", and the merchant guesses you want apples grown in Xinjiang; you add "free shipping", and the merchant picks out, from all the Xinjiang apples, those that ship free. Manually triggering the search, by contrast, means you directly tell the merchant "apple Xinjiang free shipping", and the merchant finds the qualifying results in one pass.

From the example above we can see that real-time search issues more query requests and consumes more server resources. Beyond the pressure on the server, the front-end access carrier has to render the query results many times in a short period, querying and rendering at the same time, which may lead to a poor experience such as page freezes.
In contrast, to complete a search for the same keyword, a manually triggered query needs only one query and one rendering, which is friendlier to both the server and the access carrier.
You may think that manually triggered search was invented to avoid the pressure that real-time search's frequent requests place on the server and the access terminal, that it was a forced compromise. In fact, queries existed first in the manually triggered form; real-time search came later because of its better experience, and is used only in specific scenarios.

Under what circumstances can a real-time search design be used?

1) The data volume is small and the growth is controllable or will not grow

For example, a country's administrative division data will never be large in magnitude: dozens of provinces or hundreds of cities, and new ones are not added at every turn.

2) The query condition is relatively single

Many platforms now offer aggregate search: a single input box queries N fields of the database. Doing this kind of search in real time would put enormous pressure on the server.

3) Front-end secondary query

A query is normally the process of the access terminal sending keywords to the server and the server returning results. Sometimes, however, we can perform a secondary query directly on the access terminal over results already fetched from the server. For example, if I ask the server for the cities in Guangdong, the server returns all the city names in Guangdong. If I then want to check whether a particular city is among them, I can search on the access terminal: the results were already obtained from the server, and this second query runs over them locally, with no need to request the server again. In this case, real-time search is also feasible.
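The front-end secondary query can be sketched as "fetch once, filter locally on every keystroke". This is illustrative only: `fetch_cities_from_server` stands in for one real network request, and the city list is a made-up sample.

```python
# A minimal sketch of a front-end "secondary query": one round trip to
# the server, then every keystroke filters the cached result locally,
# so real-time search costs the server nothing.

def fetch_cities_from_server(province: str) -> list:
    # Stand-in for one real network request.
    data = {"Guangdong": ["Guangzhou", "Shenzhen", "Dongguan", "Foshan", "Zhuhai"]}
    return data.get(province, [])

# One request to the server...
cached_cities = fetch_cities_from_server("Guangdong")

def local_search(keyword: str) -> list:
    # ...then subsequent queries filter the cache; no new request is sent.
    return [c for c in cached_cities if keyword.lower() in c.lower()]

print(local_search("zh"))   # → ['Guangzhou', 'Shenzhen', 'Zhuhai']
print(local_search("shen"))
```

Because the full result set is small and already on the client, matching on every keystroke is instant and free.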

3. Progress bar and Loading

The progress bar is a great design; greater still is the progress percentage, which we often see when uploading; and greater than the percentage is the remaining time, which we often see when downloading.
But the waiting time does not always appear in the form of a progress bar. Sometimes it is called Loading to accompany us through the waiting time.
If a progress bar stuck at 99.9% is one maddening moment, the appearance of a Loading animation is another. A Loading animation carries exactly one message: there is no information. We don't know how long it will last or when it will end; if loading drags on, we can't even tell whether it simply hasn't finished or has "died". There is nothing to do but wait.

We would love to get all the information the instant the access terminal sends a request, but the server's processing, the response, the access terminal's rendering, and storage (downloading) on the physical end all take time. The progress bar appeared in order to visualize that time. So why does the Loading design exist at all? Why not replace every Loading with a progress bar?

1) Too lazy to do it

A progress bar needs to calculate the total progress and the completed and remaining portions in order to draw a visual bar, and the percentage and remaining time must be calculated too. Compared with simply displaying a Loading animation, the development workload is larger. Sometimes it isn't that developers are too lazy: it's that companies or product managers, weighing time and cost, replace the progress bar design with Loading at the expense of user experience.

2) Low benefit

This is a deliberate choice to abandon the progress bar and use Loading instead in specific scenarios. For example, suppose the system limits image uploads to 2 MB. At 5G network speeds such an image uploads in an instant, and the user may see the "upload successful" prompt before ever seeing a progress bar, making the bar meaningless. Worse, sometimes the progress bar animation is still playing after the image has already finished uploading, just to present a complete animation, and for users the experience is even worse.

3) Difficult to quantify

If I have 10 apples and eat one, you can say I have eaten 10%. But if I take one bite and ask how much I've eaten, it is hard to answer accurately. Similarly, for uploads and downloads of large files, the time and percentage can be calculated from the file size and network speed, but progress is hard to quantify when querying or reading text data.
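For the quantifiable case, the percentage and remaining time fall out of simple arithmetic over bytes transferred and measured speed. A minimal sketch (the function name and rounding are illustrative; real implementations also smooth the speed over a sliding window, since instantaneous speed jitters):

```python
# A minimal sketch of a progress bar's numbers: percentage and remaining
# time derived from total size, bytes done, and current transfer speed.

def progress(total_bytes: int, done_bytes: int, bytes_per_second: float) -> dict:
    percent = done_bytes / total_bytes * 100
    remaining = (total_bytes - done_bytes) / bytes_per_second
    return {"percent": round(percent, 1), "seconds_left": round(remaining, 1)}

# A 100 MB download, 25 MB done, moving at 5 MB/s:
print(progress(100_000_000, 25_000_000, 5_000_000))
# → {'percent': 25.0, 'seconds_left': 15.0}
```

For a text query, there is no `total_bytes` known in advance, which is exactly why these numbers are "difficult to quantify" and Loading takes over.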

Although Loading is a design that was forced to compromise, product managers devoted to the user experience still try to make it as friendly as possible. We can look at several stages in Loading's evolution.

1. When the Loading animation appears, the entire page is masked and nothing can be done. If loading times out, the animation never disappears, and the only recourse is to refresh the page.
2. The page loads area by area: the Loading animation appears only in the area being loaded, and loading does not block operations in other areas.
3. On top of the previous stage, give Loading a display delay, say 2 seconds: if the data loads within 2 seconds, the Loading animation never appears at all. Also set a loading timeout, say 10 seconds: if nothing has loaded by then, the load has probably failed, so stop the load and the Loading animation and let the user manually click to reload.
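The third stage can be captured as a tiny state function: given the elapsed time and whether the data has arrived, decide what to show. A minimal sketch; the thresholds and the name `loading_state` are illustrative, not any framework's API.

```python
# A minimal sketch of "delayed Loading with timeout": the spinner only
# appears if loading outlasts a grace delay, and gives way to a manual
# retry after a timeout.

def loading_state(elapsed_seconds: float,
                  loaded: bool,
                  delay: float = 2.0,
                  timeout: float = 10.0) -> str:
    if loaded:
        return "content"   # fast load: the spinner is never shown
    if elapsed_seconds < delay:
        return "blank"     # within the grace period: show nothing yet
    if elapsed_seconds < timeout:
        return "spinner"   # slow load: show the Loading animation
    return "retry"         # timed out: stop and offer a manual reload

print(loading_state(1.0, loaded=True))    # content
print(loading_state(1.5, loaded=False))   # blank
print(loading_state(5.0, loaded=False))   # spinner
print(loading_state(12.0, loaded=False))  # retry
```

The grace delay is what keeps fast loads from flashing a spinner for a split second, which users perceive as jank.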
When should a progress bar be used, and when Loading? When the wait is long but the data can be quantified, use a progress bar, as with uploading and downloading; when the wait is short, the data volume is small, or progress is hard to quantify, use Loading, as with submitting or querying data.

4. Concurrency

Sometimes when we perform an operation, such as clicking a button, we find that it takes time to process, and we are not allowed to click again before processing completes.
The button for sending an SMS verification code goes further, making us wait a full 60 seconds.
We have such an experience that when we eat in a restaurant, if it takes too long to serve the food, we usually ask the waiter to go to the back kitchen to urge the chef.

Imagine this scene: we just asked a waiter to hurry the food along. Before that waiter has reached the kitchen, we ask a second waiter; before the second has arrived, we send a third. The chef then receives three urging requests in a row. Think about what happens to that chef.

Or imagine another scenario: we ask the waiter to hurry the food, and at the same moment the guests at two other tables do the same. If the three waiters' requests all land on the same chef, think about what happens to that chef.

The example mentioned above is called "concurrency" in technology. The first scenario is that the same user sends the same request repeatedly in a short period of time; the second scenario is that multiple users send the same request at the same time.

In the first scenario, the user may habitually double-click the button, or the system may respond so slowly that the user thinks it has frozen and clicks repeatedly; or the system may be hit by a flood of requests from viruses or web crawlers. Either way, the system receives multiple identical requests in a short period and often has to break off processing earlier requests to handle new ones. If enough requests arrive in a short enough time, the system may simply get stuck.

For the concurrency of this scenario, there are generally two ways to deal with it.

1) Front-end optimization

After the function is triggered, no further clicks are allowed until processing is complete. It is as if, right after asking the waiter to hurry the food, you ask again, and the waiter tells you that you just asked, so wait patiently.

2) Backend strategy

Multiple clicks within a short period are processed only once. For example, if a button is clicked repeatedly within 1 second, the system treats it as a single click. It is as if, right after asking the waiter to hurry the food, you ask again; the waiter says yes and walks away, but doesn't actually go urge the kitchen a second time.
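The back-end strategy can be sketched as a per-user time window: clicks arriving within the window of an accepted click are silently dropped. The `ClickDeduper` class and the 1-second window are illustrative; production systems usually do this with a shared cache such as Redis rather than in-process state.

```python
# A minimal sketch of server-side click de-duplication: requests from
# the same user within a 1-second window count as one click.

class ClickDeduper:
    def __init__(self, window_seconds: float = 1.0):
        self.window = window_seconds
        self.last_seen = {}  # user_id -> timestamp of last ACCEPTED click

    def accept(self, user_id: str, now: float) -> bool:
        last = self.last_seen.get(user_id)
        if last is not None and now - last < self.window:
            return False      # too soon: "the waiter walks away"
        self.last_seen[user_id] = now
        return True           # process this click

dedupe = ClickDeduper()
# The same user clicks at 0.0s, 0.3s, 0.8s, and 1.5s:
results = [dedupe.accept("alice", t) for t in [0.0, 0.3, 0.8, 1.5]]
print(results)  # [True, False, False, True]
```

Only the first click in each window does any work; the duplicates cost the server almost nothing.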

If conditions permit, both of these methods should be applied, front end and back end together. Take the SMS verification code above: sometimes, after we click send and the countdown appears, refreshing the page makes the send button clickable again, but clicking it prompts that verification codes are being sent too frequently. That is the front end and back end working in tandem. As for why the user must wait a full 60 seconds: besides system performance, another reason is cost. If the platform has a large number of users and every click sends a text message, then each user carelessly clicking a few extra times could leave the platform with a huge SMS bill.

The second scenario is common in flash sales, or when everyone grabs red envelopes on a platform during Chinese New Year: a large number of users flood in and send the same request almost simultaneously. This scenario tests both the system's stability and the server's performance. We sometimes hear R&D talk about "concurrency per second", which refers to the maximum number of simultaneous requests the system can handle.

To deal with concurrency in this scenario, the first measure is to improve server performance so it supports more concurrency. This is why big platforms, before running events over major holidays, sometimes announce server upgrades and performance improvements: all of it is so the servers can better withstand the traffic shock of the coming event. Of course, upgrading and scaling servers costs money and must fit the platform's actual situation. If the platform's peak concurrency is 100,000 and you boost the servers to withstand 1 million, that is simply unnecessary.

On the other hand, the system also needs optimization strategies for operations prone to high concurrency. For example, during a flash sale we sometimes see the system prompt that we are "in a queue". In fact, the system is delaying your request: requests are placed in a queue and sent to the server in order, which effectively prevents the server from being overwhelmed by too many requests arriving at once.
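The queueing idea can be sketched with a simple FIFO buffer drained in batches: requests arrive all at once, but the "server" only sees a few per tick. The names and the capacity figure are made up for illustration; real systems use message queues and admission control for this.

```python
# A minimal sketch of request queueing for a flash sale: buffer the
# flood and hand requests to the server a few at a time, in order.

from collections import deque

def drain_in_batches(requests, server_capacity_per_tick: int):
    """Yield the batches the server actually processes, tick by tick."""
    queue = deque(requests)   # everyone "takes a number" and waits
    while queue:
        batch = [queue.popleft()
                 for _ in range(min(server_capacity_per_tick, len(queue)))]
        yield batch           # the server sees a steady trickle, not a flood

incoming = [f"user-{i}" for i in range(1, 8)]  # 7 near-simultaneous requests
batches = list(drain_in_batches(incoming, server_capacity_per_tick=3))
print([len(b) for b in batches])  # [3, 3, 1]
```

Arrival order is preserved, so "queuing" is also what makes the flash sale fair: first come, first served.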


Origin blog.csdn.net/qq_41661800/article/details/130103002