[Translation] Why You Should Always Use the "var" Keyword in C#

Using the ‘var’ keyword in C# has always spurred a hot debate among developers. I believe ‘var’ should be used at all times. I believe this not because I choose to be “lazy,” as those who argue against it frequently claim. Of all the reasons I use ‘var’, laziness is not one of them.

I’ve argued for the constant use of ‘var’ countless times; this blog post is a collection of thoughts that I have compiled resulting from my arguments. Below are my reasons for using ‘var’ all of the time.

It decreases code coupling

Coupling between code and its dependent code can be reduced by using ‘var’. I do not mean coupling from an architectural perspective nor at an IL-level (the type is inferred anyway), but simply at the code level.

Example

Imagine there are 20 explicit type references, spanning twenty code files, to an object of type IFoo that is returned from another object. By explicit type references, I mean prefacing each variable name with IFoo. What happens if IFoo changes to IBar, but the interface’s methods are kept the same?

Wouldn’t you have to change it in 20 distinct places? Doesn’t this increase coupling? If ‘var’ was used, would you have to change anything? Now, one could argue that it is trivial to change IFoo to IBar in a tool like ReSharper and have all of the references changed automatically. However, what if IFoo is outside of our control? It could live outside the solution or it could be a third-party library.

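To make this concrete, here is a minimal sketch of the two styles (IFoo, IBar, the factory object, and DoWork are hypothetical names used only for illustration):

// Explicit style: every one of the twenty declarations names the type.
// If the factory's return type changes from IFoo to IBar, each of these
// lines must be edited, even though the members being called are identical.
IFoo processor = processorFactory.Create();
processor.DoWork();

// 'var' style: the type is inferred from the right-hand side, so renaming
// the returned interface does not touch this code as long as DoWork() still exists.
var processor = processorFactory.Create();
processor.DoWork();

(The two snippets are alternatives; they would not compile side by side, since the variable is declared twice.)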

It is completely redundant with any expression involving the "new" operator

Especially with generics:

ICalculator<GBPCurrency, GBPTaxType> calculator = new GBPCalculator<GBPCurrency, GBPTaxType>();

can be shortened to:

var calculator = new GBPCalculator<GBPCurrency, GBPTaxType>();

Even if calculator is returned from a method (such as when implementing the repository pattern), if the method name is expressive enough it is obvious the object is a calculator. The name of the variable should be expressive enough for you to know what the object it represents is. This is important to realize: the variable expresses not what type it represents, but what the instance of that type actually is. An instance of a type is truly an object, and should be treated as such.

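As a hedged illustration of how far expressive names can carry you (the repository, its method, and the order object below are invented for this sketch):

// The method name and the variable name already tell the reader what the
// object is; repeating the ICalculator<...> type on the left adds nothing.
var calculator = calculatorRepository.GetGbpCalculator();
var tax = calculator.CalculateTax(order.Total);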

There is a distinction between an object and its type: an object exists at runtime, it has properties and behaviors; types simply describe what an object should be. Knowing what type an object should be simply adds more noise to the source code, distracting the coder from what an object really is.

An object may be brought into this world by following the rules governed by a type, but this is only secondary information. What the object actually is and how it behaves is more important than its type. When we use an object at runtime, we are dependent on its methods and properties, not its type. These methods and properties are an object’s behaviors, and it is behaviors we are dependent upon.

The argument for knowing a variable's type has been brought up in the past. The move from Hungarian notation in Microsoft-based C++ to non-Hungarian notation found in C# is a great example of this once hot topic. Most Microsoft-based C++ developers at the time felt putting type identifiers in front of variable names was helpful, yet Microsoft published coding standards for C# that conflicted with these feelings.

It was a major culture change and mind-shift to get developers to accept non-Hungarian notation. I was among those who thought that non-Hungarian variable naming was downright wicked heresy and anyone following such a practice must be lazy and did not care about their profession. If knowing a variable’s type is so important, shouldn’t we then preface variable names in Hungarian style to know more information about an object's type?

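For anyone who never wrote Hungarian-style code, here is a rough before-and-after expressed in C# terms (the names and the GetCustomerName method are illustrative only):

// Hungarian style: the type is encoded into every name.
string strCustomerName = GetCustomerName();
int nRetryCount = 0;

// Current C# convention: names describe what the object is, not its type.
string customerName = GetCustomerName();
int retryCount = 0;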

You should not have to care what the type of an object is

You should only care what you are trying to do with an object, not what type an object may come from. The methods you are attempting to call on an object are its object contract, not the type. If variable names, methods, and properties are named appropriately, then the type is simply redundant.

In the previous example, the word "calculator" was repeated three times. In that example, you only need to know that the instance of a type (the object) is a calculator, and it allows you to call a particular method or property.

The only reason a calculator object was created was so that other code could interact with its object contract. Other code needs the calculator’s methods and properties to get something done. This need has no dependency on any type, only on an object’s behaviors.

For example, as long as the object is a calculator and the dependent code needs to call a method named “CalculateTax,” then the dependent code is coupled to an object with a method called “CalculateTax” and not to a specific type. This allows for much more flexibility, because the variable can now reference any type, as long as that type supports the “CalculateTax” method.

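One way to sketch this in C# (the interface, the factory, and the amounts are hypothetical; in statically typed C#, a dependence on a behavior is usually expressed through an interface member like this):

public interface ITaxCalculator
{
    decimal CalculateTax(decimal amount);
}

// The consuming code never names a concrete calculator type; it only relies
// on the CalculateTax behavior exposed by whatever instance the factory returns.
var calculator = calculatorFactory.Create();   // returns some ITaxCalculator
var tax = calculator.CalculateTax(100m);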

'Var' is less noisy than explicitly referencing the type

As programming languages evolve, we spend less time telling the compiler and the computer what to do and more time expressing problems that exist in the specific domain we are working in.

For example, there are a number of things in C++ that are very technical with respect to the machine, but have nothing to do with the domain. If you are a customer of Quicken or Microsoft Money, all you really want to do is manage your finances better. These software packages allow you to do that.

The better a software package can do this for you, the more valuable it is to you. Therefore, from a development perspective value is defined by how well a software package solves a user's problem. When we set out to develop such software, the only code that is valuable is the code that contributes to solving a particular user’s problem. The rest of the code is unfortunately a necessary waste, but is required due to limitations of technology.

If we had infinite memory, we would not need to worry about deleting pointers in C++ or garbage collection in C#. However, memory is a limitation and therefore the technician in us has to find ways of coping with this limitation.

The inclusion of ‘var’ in the C# language was done for a reason and marks another iteration of C# (specifically C# 3.0). It allows us to spend less time telling the compiler what to do and more time thinking about the problem we are trying to solve.

The "var" is included in C # language for a reason, C # marks an innovation (especially C # 3.0). It allows us to spend less time telling the compiler what to do, we spend more time thinking about issues that need resolving.

Often I hear dogma like "use var only when using anonymous types." Why, then, would you use an anonymous type? Usually because you do not have a choice, such as when assigning a variable to the result of a LINQ expression. And why do you not have a choice with LINQ expressions? Because the expression is accomplishing something more functional, and typing concerns are the least of your worries.

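A minimal LINQ sketch of that situation (the sample data is made up; the snippet assumes a top-level program with the usings shown):

using System;
using System.Linq;

var orders = new[]
{
    new { CustomerName = "Ada", Total = 120m },
    new { CustomerName = "Bob", Total = 80m }
};

// The projection produces an anonymous type, so 'var' is the only possible
// declaration; the type has no name you could write in front of the variable.
var bigSpenders = orders
    .Where(o => o.Total > 100m)
    .Select(o => new { o.CustomerName, o.Total });

foreach (var item in bigSpenders)
{
    Console.WriteLine($"{item.CustomerName}: {item.Total}");
}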

In the ideal C# world, we would not have to put any words in front of a variable name at all. In fact, prefacing a variable with anything just confuses the developer even further, and allows for poor variable names to become a standard whereby everyone is reliant upon explicit type references.

Arguments against using 'var'

Here are some of the arguments I have heard against using ‘var’, along with my responses:

  • “It reduces clarity” – How? By removing the noise in front of a variable name, your brain has only one thing to focus on: the variable name. It increases clarity.

"It reduces clarity" - Why by eliminating noise in front of the variable, the brain just need to focus on only one thing: variable name it adds clarity?..

  • “It reduces readability and adds ambiguity” – This is similar to #1: readability can be increased by removing words in front of the variable and by choosing appropriate variable names and method names. Focusing on type distracts you from the real business problem you are trying to solve.

"It reduces the readability, while increasing ambiguity" - similar to the first: readability can delete variables by previous word, and select the appropriate variable names and method names to increase the focus on the type will be distributed to attention to real business problems to be solved.

  • “It litters the codebase” – This is usually an argument for consistency. If your codebase uses explicit type references everywhere, then by all means do not use ‘var’. Consistency is far more important. Either change all explicit references in the codebase to ‘var’ or do not use ‘var’ at all. This is a more general argument that applies to many more issues, such as naming conventions, physical organization policies, etc.

"It makes the code base becomes a mess" - this is probably the reason if the consistency of the existing code base has been used to display the type of reference, then do not use. varConsistency is more important or all of the code reference library display is revised to read var, either never use var. this is a more general point of view, applicable to many situations, such as naming conventions, the physical file organizational policies and so on.

As a final thought, why do we preface interface names with “I” but not class names with “C” as we did in the days when Microsoft-C++ was the popular kid in school?

Source: www.cnblogs.com/xixixiao/p/why-you-should-always-use-var-keyword.html