This is so hard! Have you encountered these interview questions?

1. Does your project have a large volume of data access?

It depends on the actual situation of the project; here is an example.

For traditional (non-SaaS) projects deployed to enterprises, the user base is generally around 1,000 and concurrency is rare; a Redis cache and a thread pool are usually enough.

For Internet projects, a comparable project has around 10,000 daily active users, with concurrency between 200 and 500. Considering later user growth, the range of processing options is fairly broad.

DNS, front-end and back-end services, cache optimization, thread pools, nginx, static page generation, clustering, high-availability architecture, read-write splitting, and so on. There are many options; choose according to the actual needs of the project.

2. What do you use Redis for?

As mentioned above, Redis is mainly used for caching, such as storing files and information. Different data types are applied to different business scenarios (which also answers question 4 below). There is a detailed explanation in a previous article, so I won't repeat it here.

Redis also involves: cluster setup, avalanche, penetration, consistency, read-write splitting, master/replica setup, expiration time settings, and comparisons with other non-relational databases. Some interviewers may also ask how data is stored, which can be explained together with the data types above. Those are basically the key points.

3. What do you do if Redis crashes?

Reasons for the avalanche:

A simple, popular way to understand a cache avalanche: because the old cache has expired (or the data was never loaded into the cache) and the new cache has not yet been populated (normally the cache is read from Redis, as shown in the figure below), all requests that should hit the cache go to the database instead. This puts enormous pressure on the database's CPU and memory, and in serious cases causes database downtime and a system crash.

The basic solution is as follows:

First, most system designers use locks or queues to ensure that a large number of threads cannot read and write the database at the same time, avoiding excessive pressure on the database when the cache fails. This relieves pressure on the database to some extent, but it also reduces system throughput.

Second, analyze user behavior and try to spread cache expiration times evenly.

Third, if the avalanche is caused by a cache server going down, consider a master/backup setup, such as Redis master/replica. Note that double caching introduces update-transaction issues: updates may read dirty data, which needs to be handled.

Solutions to the Redis avalanche effect:

1. Use distributed locks (or local locks for a single-node deployment)

When a large number of requests suddenly hit the database server, restrict them: use a locking mechanism to ensure that only one thread (request) performs the operation while the others wait in line (a distributed lock for a cluster, a local lock for a single node). This does reduce server throughput and efficiency.

This guarantees that only one thread can enter; in effect, only one request performs the query.

A rate-limiting strategy can also be used here.
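The lock-based approach above can be sketched for the single-node case as follows; `loadFromDatabase` and the class name are stand-ins for the real query, and in a cluster the `ReentrantLock` would be replaced by a distributed lock (for example one built on Redis `SETNX`):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Single-node sketch: only one thread rebuilds a missing cache entry;
// the others wait on the lock and then re-read the cache.
public class SingleFlightCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final ReentrantLock rebuildLock = new ReentrantLock();

    public String get(String key) {
        String value = cache.get(key);
        if (value != null) return value;           // cache hit
        rebuildLock.lock();                        // only one rebuilder at a time
        try {
            value = cache.get(key);                // double-check after acquiring lock
            if (value == null) {
                value = loadFromDatabase(key);     // hypothetical slow DB query
                cache.put(key, value);
            }
            return value;
        } finally {
            rebuildLock.unlock();
        }
    }

    // Stand-in for the real database call.
    private String loadFromDatabase(String key) {
        return "db-value-for-" + key;
    }
}
```

Waiting threads re-check the cache after acquiring the lock, so only the first one actually touches the database.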

2. Message middleware method

If a large number of requests arrive and Redis has no value, store the query result in message middleware (taking advantage of MQ's asynchronous nature).

3. Two-level cache: Redis + Ehcache

4. Spread the expiration times of Redis keys evenly

Set different expiration times for different keys to make the time points of cache invalidation as even as possible.
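A minimal sketch of spreading expirations by adding random jitter to a base TTL; the `redis.setex` call mentioned in the note is illustrative, standing in for whatever your Redis client provides:

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch: add random jitter to a base TTL so keys cached at the same
// moment do not all expire at the same moment.
public class TtlJitter {
    // Returns baseSeconds plus up to maxJitterSeconds of random extra time.
    public static long ttlWithJitter(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }
}
```

For example, `redis.setex(key, TtlJitter.ttlWithJitter(3600, 300), value)` would spread expirations across a five-minute window instead of one instant.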

4. Which Redis data types do you use most often?

String (string), Hash (dictionary), List (list), Set (set), Sorted Set (ordered set)

5. How do you ensure Redis cache consistency?

1. First scheme: the delayed double-delete strategy

Perform the redis.del(key) operation both before and after writing to the database, and set a reasonable timeout.

The pseudocode is as follows:

```java
public void write(String key, Object data) {
    redis.delKey(key);          // 1) delete the cache first
    db.updateData(data);        // 2) then write the database
    try {
        Thread.sleep(500);      // 3) sleep ~500 ms
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    redis.delKey(key);          // 4) delete the cache again
}
```

2. The specific steps are:

1) Delete the cache first

2) Write the database again

3) Sleep for 500 milliseconds

4) Delete the cache again

So how is this 500 milliseconds determined? How long should the sleep actually be?

You need to evaluate how long your project's read-path business logic takes. The goal is to ensure that any in-flight read request finishes before the second delete, so the write request can remove any dirty cache data the read request may have written back.

This strategy should also account for the time master-slave synchronization between Redis and the database takes. The final sleep time when writing data: the read path's duration plus a few hundred milliseconds; for example, sleep for one second.

3. Set the cache expiration time

In theory, setting an expiration time on the cache guarantees eventual consistency. All write operations go to the database; once the cached entry expires, subsequent reads naturally fetch the new value from the database and backfill the cache.

4. Disadvantages of the program

Combining the double-delete strategy with a cache timeout, the worst case is that data stays inconsistent within the timeout window, and write requests take longer.

2. Second scheme: update the cache asynchronously (a synchronization mechanism based on subscribing to the binlog)

1. Overall technical thinking:

MySQL binlog incremental subscription consumption + message queue + incremental data update to redis

1) Reading Redis: hot data is basically all in Redis

2) Writing MySQL: inserts, deletes, and updates all operate on MySQL

3) Updating Redis: use the binlog of MySQL's data operations to update Redis

2. Redis update

1) Data operations are mainly divided into two major blocks:

One is full (write all data to redis at once)

One is incremental (updated in real time)

The incremental case, referring to MySQL update, insert, and delete changes, is what we use here.

2) After reading the binlog, parse it and use a message queue to push the update to each node's Redis cache.

This way, as soon as MySQL produces a write, update, delete, or other operation, the related binlog message can be pushed to Redis, and Redis updates itself according to the binlog records.

In fact, this mechanism is very similar to the master-slave backup mechanism of MySQL, because the master-slave backup of MySQL also achieves data consistency through binlog.

Here you can use Canal (an open-source framework from Alibaba) to subscribe to the MySQL binlog. Canal imitates the replication requests of a MySQL slave, so updating Redis achieves the same effect as replication.

Of course, other third-party message-push tools such as Kafka or RabbitMQ can also be used to push updates to Redis.
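The "apply binlog events to the cache" step can be sketched schematically; `BinlogEvent` is a made-up type and a `HashMap` stands in for Redis, since the real pipeline would consume Canal or Kafka messages:

```java
import java.util.HashMap;
import java.util.Map;

// Schematic only: BinlogEvent is a made-up type, and the Map stands in
// for Redis. A real pipeline would consume Canal/Kafka messages instead.
public class BinlogToCache {
    enum Op { INSERT, UPDATE, DELETE }

    record BinlogEvent(Op op, String key, String value) {}

    private final Map<String, String> redis = new HashMap<>();

    public void apply(BinlogEvent e) {
        switch (e.op()) {
            case INSERT, UPDATE -> redis.put(e.key(), e.value()); // upsert into cache
            case DELETE -> redis.remove(e.key());                 // evict from cache
        }
    }

    public String get(String key) { return redis.get(key); }
}
```

The point is simply that every row change in MySQL maps to a put or a delete on the cache, which is what keeps the two stores converging.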


6. For example, if you define a custom property in Spring Boot, how do you reference it in a bean?

1. Under the packages that Spring Boot can scan

Write a utility class, SpringUtil, that implements the ApplicationContextAware interface, and add the @Component annotation so that Spring scans it as a bean.

2. Not under Spring Boot's scanned packages

This case is also simple to handle: first write the SpringUtil class, which again needs to implement the ApplicationContextAware interface.

7. How does JWT work?

[Figure: JWT authentication and authorization flow]

Focus mainly on its authentication and authorization flow; secondly, the invalidation of long-lived and short-lived tokens is also worth looking at.
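The core sign-and-verify step of a JWT (HS256) can be illustrated with the JDK's built-in HMAC support; this is only a sketch of the mechanism, with hand-written JSON strings, and a real project should use a JWT library:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Minimal illustration of the JWT signing step (HS256): the server signs
// header.payload with a secret, and later verifies the signature before
// trusting the claims. Real projects should use a JWT library.
public class JwtSketch {
    private static final Base64.Encoder B64 = Base64.getUrlEncoder().withoutPadding();

    static String sign(String headerJson, String payloadJson, String secret) throws Exception {
        String signingInput = B64.encodeToString(headerJson.getBytes(StandardCharsets.UTF_8))
                + "." + B64.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        return signingInput + "."
                + B64.encodeToString(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));
    }

    static boolean verify(String token, String secret) throws Exception {
        int lastDot = token.lastIndexOf('.');
        String signingInput = token.substring(0, lastDot);
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String expected = B64.encodeToString(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));
        return expected.equals(token.substring(lastDot + 1));   // recomputed sig must match
    }
}
```

Because only the server knows the secret, a tampered payload or a token signed with the wrong key fails verification.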

8. During registration, the user clicked repeatedly due to network jitter, and the database has no unique-index check. How do you handle it? The user sends two requests; how do you handle that?

First, intercept the request and process repeated requests within one minute only once;

Put the requests into a message queue;

Without unique-index validation, this part can be checked against the cache first, without hitting the database;

For two concurrent requests:

distributed locks, local lock mechanisms, Redis caching, queues, and so on can all handle this; explain in detail according to your scenario.
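The "process repeated requests within one minute only once" idea can be sketched for a single node; in production this check is usually a Redis `SET key NX EX <ttl>`, and the class and method names here are illustrative:

```java
import java.util.concurrent.ConcurrentHashMap;

// Single-node sketch of "process repeated requests only once per window".
// In production this check is usually done with Redis SET key NX EX <ttl>.
public class DuplicateRequestFilter {
    private final ConcurrentHashMap<String, Long> seen = new ConcurrentHashMap<>();
    private final long windowMillis;

    public DuplicateRequestFilter(long windowMillis) {
        this.windowMillis = windowMillis;
    }

    // Returns true only for the first request with this key inside the window.
    public boolean tryAcquire(String requestKey, long nowMillis) {
        Long prev = seen.putIfAbsent(requestKey, nowMillis);
        if (prev == null) return true;                        // first time seen
        if (nowMillis - prev >= windowMillis) {               // window expired
            return seen.replace(requestKey, prev, nowMillis); // atomically refresh
        }
        return false;                                         // duplicate inside window
    }
}
```

The request key would typically be something like the user's email plus the endpoint, so a double click maps to the same key.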

9. Have you read the spring source code? What is the injection process of a bean?

This is where our fundamentals are tested.

First create a bean class. The @Configuration and @ComponentScan annotations are necessary here; if you use a configuration prefix, the @ConfigurationProperties annotation is also required;

Then set the property values in the configuration file, such as application.properties.

10. In Spring Boot, how do you write a bean and inject it into the IoC container?

The first way: add @Service, @Component, or similar annotations to the bean class to be injected

The second way: use the @Configuration and @Bean annotations to configure the bean into the IoC container

The third way: use the @Import annotation

The fourth way: Spring Boot's auto-configuration mechanism

11. How does MyBatis prevent SQL injection? Why can #{} prevent SQL injection?

#{} is precompiled and therefore safe; ${} is not precompiled, it simply substitutes the variable's value, so it is unsafe and open to SQL injection;

Under the hood, #{} relies on PreparedStatement.
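To see why, here is an illustrative comparison of the two behaviours (the `simulate...` methods are teaching aids, not MyBatis internals):

```java
// Demonstrates why ${} is dangerous: it splices the raw text into the SQL,
// while #{} leaves a ? placeholder for PreparedStatement to bind safely.
public class SqlInjectionDemo {
    // ${} behaviour: plain string substitution into the SQL text.
    static String simulateDollar(String input) {
        return "SELECT * FROM users WHERE id = " + input;
    }

    // #{} behaviour: the SQL keeps a placeholder; the value is bound
    // separately by PreparedStatement and never parsed as SQL.
    static String simulateHash() {
        return "SELECT * FROM users WHERE id = ?";
    }
}
```

With input `1 OR 1=1`, the ${}-style query becomes `SELECT * FROM users WHERE id = 1 OR 1=1` and returns every row, while the #{}-style statement stays unchanged and treats the whole input as a single bound value.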

12. How are the interface documents for the Vue front end and the back end handled?

This examines your understanding of Vue and of front-end/back-end interaction. You can start with some common ways Vue is used.

[Figure: common Vue parameter-passing methods]

The above lists some parameter-passing methods, which can be used as a reference.

13. Have you used AOP? How do you record each interface's time consumption?

First, create a class and add two annotations to it:

@[email protected]

The @Component annotation is to allow this class to be managed by spring as a bean, and the @Aspect annotation is to indicate that this class is an aspect object.

The meaning of each annotation used on the class's methods is as follows:

@Pointcut defines the aspect's matching rules. If you want to match multiple conditions, connect the rules with ||; see the code above for details.

@Before is called before the target method executes
@After is called after the target method executes

@AfterReturning is called after the target method finishes; you can obtain the return value. It executes after @After.

@AfterThrowing is called when the target method throws an exception

@Around actually invokes the target method, so you can do some operations before the call and some after it. Use cases include transaction management, permission control, log printing, performance analysis, and so on.

The above covers the meaning and function of each annotation. The two most important are @Pointcut and @Around: @Pointcut specifies the aspect rules and decides where the aspect applies; @Around actually invokes the target method, so you can do some processing before and after the call, such as transactions, permissions, logging, and so on.

Note the execution order of these methods. Before the target method executes: around first, then before. After the target method executes: around first, then after, and finally afterReturning.

The actual log information is as follows, you can see the execution order of each method:

[Figure: log output showing each method's execution order]

In addition, to use spring aop, you need to add the following line of configuration to the spring configuration file to enable aop:

<aop:aspectj-autoproxy/>

At the same time, the dependent jar package needs to be added to maven:

```xml
<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjrt</artifactId>
    <version>1.6.12</version>
</dependency>
<dependency>
    <groupId>org.aspectj</groupId>
    <artifactId>aspectjweaver</artifactId>
    <version>1.6.12</version>
</dependency>
```

To sum up, Spring AOP uses dynamic proxies to handle the aspect layer uniformly. There are two kinds of dynamic proxy: JDK dynamic proxies and CGLIB dynamic proxies. JDK dynamic proxies are based on interfaces, while CGLIB proxies are based on subclassing. Spring uses JDK dynamic proxies by default; if the target has no interface, Spring automatically falls back to CGLIB.
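What Spring does with a JDK dynamic proxy can be sketched directly against the JDK API; `Greeter` and the timing logic are illustrative, mimicking what an @Around advice measuring interface time would do:

```java
import java.lang.reflect.Proxy;

// Sketch of what Spring AOP does under the hood with a JDK dynamic proxy:
// wrap an interface and time each call, like an @Around advice would.
public class TimingProxyDemo {
    interface Greeter { String greet(String name); }

    static Greeter timed(Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                (proxy, method, args) -> {
                    long start = System.nanoTime();
                    Object result = method.invoke(target, args);  // call the real method
                    long elapsed = System.nanoTime() - start;
                    System.out.println(method.getName() + " took " + elapsed + " ns");
                    return result;
                });
    }
}
```

The proxy implements the same interface as the target, which is exactly why JDK proxies require an interface and class-only targets need CGLIB.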

14. Do you understand design patterns? What design patterns have you used?

Singleton, factory, proxy, decorator, and so on; explain them in combination with their actual application in your project.
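As one concrete example, a common thread-safe singleton in Java is the lazy-holder idiom (the class name here is illustrative):

```java
// A common singleton implementation in Java: the lazy-holder idiom.
// The JVM guarantees Holder is initialized exactly once, thread-safely,
// the first time getInstance() is called.
public class ConfigSingleton {
    private ConfigSingleton() {}              // block outside instantiation

    private static class Holder {
        static final ConfigSingleton INSTANCE = new ConfigSingleton();
    }

    public static ConfigSingleton getInstance() {
        return Holder.INSTANCE;
    }
}
```

Compared with double-checked locking, this needs no volatile field or explicit synchronization because class initialization is already serialized by the JVM.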

15. How does MyBatis access a mapper?

When MyBatis starts, it first loads the core configuration file (mybatis.xml) through Resources, then parses it with XMLConfigBuilder. After parsing, the result is placed into a Configuration object, which is passed as a parameter to the build() method, returning a DefaultSqlSessionFactory. We then call openSession() to obtain a SqlSession. While constructing the SqlSession, a Transaction and an Executor are also created for subsequent operations.

[Figure: MyBatis execution flow]

16. Are transactions used in the code? Annotation-based or programmatic?

Annotations are generally used (@Transactional). For the specific implementation, refer to yesterday's interview-question explanation.

17. Is the project online?

Since a traditional project cannot be shown directly, prepare some screenshots to bring to the interview, or even give a PPT presentation;

For an Internet project, it also works to open it on your phone and walk through it directly during the interview.

One takeaway here: prepare for the interview in advance. Know what you want to say and how you will say it.

Consider several angles: what it is, why, how to do it, and what you gained.