I’ve noticed a worrying trend of late, when looking at code written by developers who are new to C#, or have never worked with the language prior to C# 3.0. I am referring to the misuse and overuse of the var keyword.
The purpose of var, for those who don’t know, is to omit the type name when declaring a local variable in situations where the type name is unknown, unavailable or doesn’t exist at the point where the code is written. The primary case is anonymous types, whose type names are generated by the compiler and are never available to the programmer. It is also useful in LINQ, where the result of a query cannot easily be stated by the programmer, perhaps because it involves grouping structures, nested generic types or, indeed, anonymous types as well.
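To make the intended use concrete, here is a minimal sketch of both cases (the Contact class and the contacts collection are invented for illustration):

// An anonymous type's name is generated by the compiler, so var is the
// only way to declare a variable that holds one:
var person = new { Name = "Ada", Age = 36 };

// This LINQ query is of type IEnumerable<IGrouping<string, Contact>>;
// var spares the programmer from spelling out that nested generic type:
var contactsBySurname = from c in contacts
                        group c by c.Surname;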
There seems to be a tendency for some programmers to use var for every variable declaration. Sure, the language doesn’t stop you from doing this and, indeed, MSDN admits that this is a “syntactic convenience”… But it also warns quite strongly that:
…the use of var does have at least the potential to make your code more difficult to understand for other developers. For that reason, the C# documentation generally uses var only when it is required.
Implicitly Typed Local Variables (C# Programming Guide), MSDN
I discovered recently that the commonly-used tool ReSharper practically mandates liberal use of var. Frankly, this isn’t helping the situation. Some developers argue that var somehow improves readability and broader coding practices, as in this article:
By using var, you are forcing yourself to think more about how you name methods and variables, instead of relying on the type system to improve readability, something that is more an implementation detail…
var improves readability, Hadi Hariri
I agree with the premise of the quote above, but not with the end result. On the contrary, the overuse and misuse of var can lead to some very bad habits…
Let’s look at the argument against the widespread use of var (and for its sparing, correct use):
Implicitly-typed variables lose descriptiveness
The type name provides an extra layer of description in a local variable declaration:
// let's say we have a static method called GetContacts()
// that returns System.Data.DataTable
var individuals = GetContacts(ContactTypes.Individuals);

// how is it clear to the reader that I can do this?
return individuals.Compute("MAX(Age)", String.Empty);
My variable name above is perfectly descriptive; it differentiates this variable from any other populated using GetContacts(), and indeed from other variables of type DataTable. When I operate on the variable, I know that it’s the individual contacts that I’m referring to, and that anything I derive from them will be of that context. However, without specifying the type name in the declaration, I lose the descriptiveness it provides…
// a more descriptive declaration
DataTable individuals = GetContacts(ContactTypes.Individuals);
When I come to revisit this body of code, I’ll know not only what the variable represents conceptually, but also its representation in terms of structure and usage; something lacking from the previous example.
‘var’ encourages Hungarian Notation
If the omission of type names from variable declarations forces us to name our variables more carefully, it follows that variable names are more likely to describe not only their purpose, but also their type:
var dtIndividuals = GetContacts(ContactTypes.Individuals);
This is precisely the definition of Hungarian Notation, which is now heavily frowned upon as a practice, especially in type-safe languages like C#.
Specificity vs. Context
There’s no doubt that variable names must be specific, however, they need never be universally-specific. Just as a local variable in one method doesn’t need to differentiate itself from variables in other methods, a declaration that includes one explicit type need not differentiate itself from variables of a different explicit type. Implicit typing with var destroys the layer of context that type names provide, thus it forces variable names to be specific regardless of type:
// type provides context where names could be perceived as peers
Color orange = canvas.Background;
Fruit lemon = basket.GetRandom();
//...

// this is far less obvious
var orange = canvas.Background;
var lemon = basket.GetRandom();

// you can't blame the programmer for making this mistake
SomeMethodThatOperatesOnFruit(orange);
Increased reliance on IntelliSense
If the type name is now absent from the declaration, and variable names are (quite rightly) unhelpful in ascertaining their type, the programmer is forced to rely on IDE features such as IntelliSense in order to determine what the type is and what methods/properties are available.
Now, don’t get me wrong, I love IntelliSense; I think it’s one of the most productivity-enhancing features an IDE can provide. It reduces typing, almost eliminates the need to keep a language reference on-hand, cuts out many errors that come from false assumptions about semantics… the list just goes on.
Unfortunately, the ultimate caveat is that IntelliSense isn’t universally available; you can write C# code without it, and in some cases I think that programmers should! Code should be easily-maintainable and debuggable in all potential coding environments, even when IntelliSense is unavailable; and implicitly-typed variables seriously hinder this objective.
No backwards compatibility
One of the advantages of an object-oriented language like C# is the potential for code re-use. You can write a component and use it in one environment (e.g. WPF, .NET 3.5), then apply it in another (e.g. ASP.NET 2.0). When authoring such components, it’s useful to be aware of the advantage of that code working across as many versions of the language and framework as possible (without impeding functionality or adding significant extra code, of course).
The practice of using var for all local variable declarations renders that code incompatible with C# 2.0 and below. If var is restricted to its intended use (i.e. LINQ, anonymous types) then only components which utilise those language features will be affected. I’ve no doubt that a lot of perfectly-operable code is being written today that will be useless in environments where an older version of the framework/language is in use. And believe me, taking type names out of code is a hell of a lot easier than putting type names back in to code.
Final Words
I sincerely hope that people will come away from this article with a better understanding of the purpose of the var keyword in C#, when to use it and, more importantly, when not to use it. As a community of developers, it’s important to encourage good practices and identify questionable ones; and I believe that the overuse of var is certainly one such questionable practice.
Why should I write code for obsolete environments that don’t have intellisense?
I don’t publish my code, so it is only going to be read on my machine, with great intellisense. Likewise, I have no intention of supporting prior versions of the framework, and it is not clear that there is a compelling reason for me to do so.
Your quote from MSDN is inappropriate. MSDN is documentation. It IS almost always read without intellisense, and it is read to understand details of an API rather than the flow of an algorithm. If I were writing documentation, I would avoid var.
Using var everywhere (as I do) makes refactoring easier and allows me to focus on the meaning of my code, which is rarely dependent on the exact type of the object involved. (Alternatively, if the semantics of a named method depend on the type, then something is wrong with the design, not the use of var.)
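For what it’s worth, a minimal sketch of the refactoring argument (repository, Customer and GetCustomers() are hypothetical):

// If GetCustomers() later changes its return type, say from List<Customer>
// to IReadOnlyList<Customer>, this line compiles unchanged:
var customers = repository.GetCustomers();

// ...whereas this line would need editing after the same change:
List<Customer> customersExplicit = repository.GetCustomers();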
The author is clearly a non-var fanboy.
The data type declarations are just noise unless you’re dealing with types like int, float, double, decimal etc.
Did you even read the whole article?
Intellisense isn’t the issue, it’s code readability. To determine what type “var” is, one must hover over the method to see what the return object is.
Whilst I agree ReSharper’s default suggestion of var even for ints, etc. is silly (I have no problem with people electing to code that way, but it’s a silly recommendation imo) – it has a nice in-between setting to recommend var only when the type already appears on the right-hand side.
I don’t think any real criticism can be made of using var for those instances (which are most variable declarations I find). What’s the advantage in typing the same type on both sides of the assignment operator for casts/new operators?
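A small illustration of that point (StringBuilder chosen arbitrarily):

// The type is written twice with an explicit declaration:
System.Text.StringBuilder buffer = new System.Text.StringBuilder();

// With var it still appears, exactly once, on the right-hand side:
var buffer2 = new System.Text.StringBuilder();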
I agree that the presence of a cast or new operator mitigates the readability problem, but it still doesn’t change the fact that the intended purpose of var was to provide a way to declare a variable of a compiler-generated type whose name would not be available at the point of coding. If you know the type name, why omit it? The var keyword was never meant to substantially change syntax or coding practices in C#, and yet many programmers see it as a reason to.
The argument that we should never use a feature in a new way beyond what it was originally intended for seems short-sighted to me. Yes, it is important to understand the original intent, but it is also reasonable to expand on the original functionality if there is a good reason to do so.
I agree; there are many uses for var that I find to be detrimental, but the scenario that Alex Davies describes is perfectly valid. There is no reason to be overly verbose and redundant in that case, when it gains the reader and the developer exactly nothing.
Hear, hear!
My two cents. I hate var. It shows a developer’s utter laziness and lack of intelligence. If you know what type it is, declare it. If you can’t figure out what type it is, then you probably shouldn’t be writing code.
Using var makes code less readable. I look at a definition to see its type and to guarantee its type.
Let’s look at this example:
var i = 0;
That declaration says nothing about the type and what I can expect from it. If you don’t know what you are doing and using large counts you will overflow it. I hope all of you that use var do overflow it and it causes a nasty bug for you. You deserve it. If you are lazy and want to shoot yourself in the foot then go ahead.
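The underlying overflow point can be shown in a few lines (a sketch):

// var infers int here, whose maximum value is 2,147,483,647:
var count = 0;
count = int.MaxValue;
count++; // in (default) unchecked code this silently wraps to -2,147,483,648

// An explicit declaration makes the intended range part of the code:
long total = 0; // use long where a count may exceed int.MaxValue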
Sorry, but there are times when the type name itself is very long. Writing the full type name every time just creates an unreadable mess. It is not laziness if you aim to create readable code!
Using var makes, imo, perfect sense when you don’t care about the type.
I ALWAYS care about the type. And even if I don’t, the guy who comes after me does. Var may be short, but it’s also self-obfuscating. I wouldn’t hire a coder who uses var liberally. It’s reflective of a shortcut mindset. I understand the good intentions associated with thinking less code is cleaner code, but it’s not. Clear, obvious code is clean code.
“Sorry but there are times where the typename itself is very long.”
If your type name is very long, then maybe you should be better at defining your types, instead of resorting to lazy shortcuts.
The only people who think long words make an unreadable mess are people who struggle to read long words. Var is a lazy man’s term, because modern programmers are lazy, which is why modern software is generally rubbish in quality.
I whole-heartedly agree. var is poison. It was provided solely to allow the use of anonymous types, or types which are difficult to work out (pretty much only in LINQ). I should be able to look at a code snippet and have a good idea what it does, without knowing too much context. If a code snippet is littered with var, I need to refer to several other files to work out what it does. Yes I could use intellisense, but what if I’m looking at code online, on a source server, in an open source project, on someone’s blog, etc? Using var is lazy. Lazy programmers are bad programmers. Thus, using var means you are a bad programmer. 😉
Until now, I have only been using var for LINQ, etc. However, I recently saw a demo from Anders Hejlsberg, the lead architect of C#, where he was using it right and left. So, I’m switching over to var.
Just because a popular name in programming uses it means it’s the best practice? That’s absurd. var is an overused atrocity that has led to too many bugs and pointless arguments (like this one). C# was boasted as a type-safe language and by using var outside its intended functionality is pretty much breaking that safety.
When my house was built, the builders buried the waste in my front, sides, and back yard. When I talked to others about this, I found out that this is common practice. What does this mean? This means that the workers were lazy, got overpaid for their work, left a mess for someone else to clean up and reflects their professionalism. I’ll never use that builder or his workers and will tell people I will never recommend them.
That’s how us devs are viewed at times; lazy, incompetent and overpaid. The competition is enormous out there and I wouldn’t want to be weeded out because I made someone’s life harder or viewed as lazy.
For the most part I cannot fault anyone for not using var. I also don’t agree that it should be used whenever possible. However, I think there is a case to be made for using it to shorten declarations. For example:
var foo = new Dictionary<string,SomeBigHonkingNameForAGenericClass>();
is much more readable than having the variable name in the middle.
My rule of thumb is: does it increase readability?
Does it make it easier or more difficult for the code reviewers who will be reading my code without the aid of intellisense?
even
var contact = contactDatabase.LookupByName(name);
would be OK and would not be any less readable than
Contact contact = contactDatabase.LookupByName(name);
In both cases I know I am getting a Contact object back. Neither gives more or less information about Contact.
but
var data = database.GetById(352);
would be very poor.
Exactly my point.
Using var without thinking about it is bad. When you use it to increase readability however, it can be a great thing
“var contact = contactDatabase.LookupByName(name);”
You have no idea what type of object is coming back.
You don’t need to. The important thing is that it’s a contact. Just think when you’re naming your variables and you’ll be OK.
@gordon: The best programmers are lazy programmers. They avoid repetition at all cost, and come up with re-usable composable solutions so that they don’t have to do the same work twice.
Yes! Avoiding repetition (what ‘some’ call lazy) is the mother of invention. Lazy programmers go out of their way (ironically) to not have to repeat and instead invent. I’d rather invent than be a typist, we aren’t writing essays, there is no word count to hit.
I do not think using the term var instead of writing “str” and hitting tab will free up so much time that your code will somehow become far better and you can finish work early and make a car that runs off energy it produces by running.
Since we’re being lazy, we should also stop writing comments. (sarcasm) Comprehensible code should be our goal, not smaller code files. If anything, we need more typing so other devs know what’s going on after we’ve moved on to new things. If you find yourself coming back to your code 6 months later, you’ll thank yourself for how well documented something is.
Great article. I have even seen developers using “var” for the “bool” type.
So what? If the variable is called something like isOpen then a type annotation is redundant, and if the variable name doesn’t reflect the fact that it is a bool then the programmer is incompetent.
So isOpen must be bool, but not bool?
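The retort is easier to see in code (GetStatus and TryGetStatus are hypothetical methods returning bool and bool? respectively):

bool isOpen = GetStatus();        // plainly bool
bool? maybeOpen = TryGetStatus(); // plainly nullable

// Reads identically to the first line, but is actually bool? and
// needs a null check before use:
var isOpenInferred = TryGetStatus();
if (isOpenInferred == true) { /* ... */ }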
I consider this issue to separate good programmers (and thinkers) from bad. var should be used widely because it’s a form of *decoupling* and DRY … the type of a thing should be established at its definition, with as few other mentions as possible. As for Hungarian, you just don’t understand the problem with it, which again is about *coupling*. There is nothing wrong with the original “apps” Hungarian, only with the later botched “systems” Hungarian that encoded specific datatypes like dw rather than conceptual/functional types. In your example, both
var dtIndividuals = GetContacts(ContactTypes.Individuals);
and
DataTable individuals = GetContacts(ContactTypes.Individuals);
encode the capabilities of “individuals”; the Hungarian version is no worse for that than the explicit type declaration.
But what is the point of that if we are dealing with a type-safe language/environment? I can see that being true in languages like python, but C#? Types matter in C#. It’s fine if you want to question the whole concept of type-safe, but then you are arguing languages, not C# coding styles.
var is type safe. Using var has no bearing on type safety.
And types matter in all languages. In fact, they matter far more in dynamic languages because, if you don’t pay close attention to them and get them all right, your program crashes at runtime, whereas in statically typed languages a type error is usually just a typo that the compiler tells you about. The major exception is int vs. float types, especially because / means different things for ints than for floats (an old but poor design).
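A two-line sketch of that exception:

var a = 7 / 2;   // integer division: a is int and holds 3
var b = 7 / 2.0; // floating-point division: b is double and holds 3.5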
Many people who object to var have major misunderstandings like this.
“Let’s look at this example: var i = 0; That declaration says nothing about the type and what I can expect from it.”
That’s because the variable doesn’t have an informative name — that’s bad programming, but is irrelevant to var.
“I hope all of you that use var do overflow it and it causes a nasty bug for you. You deserve it.”
That is a vile attitude and no one who utters such a thing has any credibility.
“It was provided solely to allow the use of anonymous types”
This simply isn’t true. While anonymous types necessitated var (and auto in C++11), type inference is a modern concept promoted by languages like Haskell and Scala, and this had a significant influence on introducing it into C# and C++.
“Using var is lazy. Lazy programmers are bad programmers.”
Using var is concise, and consistent with DRY. By this ridiculous argument (no less invalid by adding a smiley), only programmers who code in binary and spend weeks squeezing out every last bit are good programmers. Of course this is wrong — programmers who don’t understand why it’s good to use var (in many many cases) are bad programmers.
“Just because a popular name in programming uses it means it’s the best practice?”
Nothing was said about “popular”, which indeed would be irrelevant. But being “the lead architect of C#” is very relevant to best practice of that language.
“C# was boasted as a type-safe language and by using var outside its intended functionality is pretty much breaking that safety.”
var is completely typesafe. This statement shows a failure to understand the concept.
Thanks for your reply to that nonsense. Saves me the typing.
As I’ve read in numerous places now, the original purpose of the “var” keyword was not to reduce character count, but to deal with anonymous types and later LINQ queries.
What “var” is being used for today instead seems to be to reduce statements of this nature:
{VariableType} {Variable} = new {VariableType}();
To this:
var {Variable} = new {VariableType}();
That’s all well and good if someone wants to code that way, because C# is still a strongly typed language and you could still, given enough time or the right tools, figure out what type the variable is. But the problem is, this is not the purpose of var. A better feature (and one that I think should be added to alleviate this coding-standards-war) would be:
{VariableType} {Variable} = new();
Any time you use a variable, you need to know what it is, so I don’t see the value in trying to obfuscate the type. I do, however, see the value in making simpler statements than the following:
Dictionary<string, List<Dictionary<int, string>>> myComplicatedVariable = new Dictionary<string, List<Dictionary<int, string>>>();
But at some point, you have to define the type and it might as well be obvious:
Dictionary<string, List<Dictionary<int, string>>> myComplicatedVariable = new();
That’s an interesting syntax you’ve proposed. Unfortunately, it breaks down when polymorphism comes into the equation, e.g.:
BaseType obj;
if (someCondition)
obj = new DerivedType1();
else if (someOtherCondition)
obj = new DerivedType2();
else
obj = new DerivedType3();
I’m not sure how your syntax would apply (if at all) in a situation like the one above.
Isn’t that an irrelevant point, considering that you can’t use var in that case either? So when the left and right hand sides of an assignment differ, you always have to state the type explicitly at both ends.
What exactly is the problem with relying on IntelliSense? Following that argument, you’re saying that programmers should be memorizing (or wasting time digging through) every single API they use instead of having it all at their fingertips as they type. While it’s noble to know a library well enough to leverage it without IntelliSense, why not reduce the mental burden so we can focus on other tasks? The same thing goes for var. IntelliSense is one of those advances in programming that can and should be taken completely for granted; good IDEs have it, and there’s no reason they shouldn’t.
Taking it a step further, perhaps we should edit blocks on disk by hand with a magnet, rather than writing assembly? Or push and pop registers instead of writing C? Roll our own web servers instead of using IIS or Apache? No. Unless that’s the goal of your project or your language is structured such that these concepts are necessary, there’s no reason to worry about the lower layers in most cases.
Var is just another small evolution, another abstraction we can use to free ourselves from writing mundane boilerplate code while we focus on solving real problems instead of struggling with the environment. You shouldn’t worry about what’s going on under the covers unless you have a good reason to dig into it. Those that abhor syntactic sugar should consider whether there is really a drawback, or if it’s just that they want to keep membership in the “elite” club of programmers who “learned the hard way” by being forced by compilers to specify their types.
I don’t have intellisense at all if I’m reading code snippets on websites like SO, or on Github.
Apart from that, Intellisense hovers are still slower than my own reading.
Whilst I can agree with the sentiment, why would you use a manual screwdriver when you have an electric one in the cupboard? The point: you should still be able to use the manual screwdriver – otherwise, without Visual Studio or a similar IDE with such good intellisense, you will be a very poor programmer (or at the least bring your code output to a standstill).
This is, however, not what the author was trying to suggest. As the comment above notes, you read code with your eyes, not by rolling over variables with the mouse to reveal the type. Therefore it stands to reason that by providing type information in the text you increase the overall readability of the code. Further, why go out of your way to make your code less readable to save (in some instances) a couple of keystrokes which could be tabbed through with intellisense?
If you have been around long enough, you will know that not being explicit in your code leads to many hours wasted bug fixing. Anyone who programmed before Visual Studio became the tool it is today will know this pain all too well (and even now when working with other people’s code in VS). But as with all things programming, the people who have learnt from their mistakes grow old and are seen as out of date; the same mistakes are made again, and the people making them grow old in turn. The cycle continues ad nauseam.
“To this:
var {Variable} = new {VariableType}();
That’s all well and good if someone wants to code that way, because C# is still a strongly typed language and you could still, given enough time or the right tools, figure out what type the variable is. ”
Huh? Just look to the right. No time or “right tools” required.
This is a pretty specific case. It happens fairly frequently, but what about:
var {Variable} = _GetVariable();
Then you are left having to jump to the _GetVariable() method declaration or using the IDE pop-ups to determine what the return type is.
The argument for using var is that we shouldn’t care about types. We should use descriptive variable names that describe a concept, and rely on Intellisense to figure out what we can do with it. (Or if you are just reading code, to simply read the methods that are being applied to the variable.)
That’s a nice theory or philosophy, but I don’t see it being practical in most situations I face. I often find myself having to figure out the implementation of classes, and switching to other classes if the implementation is wrong, inefficient, or the programmer that created it was making other assumptions about its use. (Yes, I know I would save typing when switching classes since I don’t have to rewrite “var”.)
I find that knowing the type is very helpful. It allows me to use the variable/class much more efficiently than if I just “try it and see if it works”.
Brad, you are correct. This is a worrying trend. It seems to be due to laziness or unwillingness to type out the type name. Thanks for summing things up here.
I might have agreed with you back when I was a hardcore Java programmer, but I’ve spent the past 1.5 years coding Python, and coming back to C#’s world of nonstop explicit typing feels so unbelievably tedious and cumbersome, especially when generics come into play. Believe it or not, it is possible to write very good programs without type annotations littered everywhere. A well written Python program may have no type annotations, yet be surprisingly clear in its type expectations. To some extent, type annotation can be a crutch that allows people to get away with poor documentation and poorly organized code. “var” softens the blow of explicit typing a good deal by giving the programmer the best of both worlds: let your function definitions do your type declarations for you, and leave them off of your variables.
I do use “var” a lot in my C# applications and they work just fine, and I do not think I am creating a problem for subsequent developers. One has to bear in mind that checking the type of a referenced procedure/function etc. is as easy as hovering over it with the mouse to reveal its return type. My preference is to use var wherever I can get away with it.
Thanks for saying it for me, Brad. While I can see some advantages to inferred types, I see them so frequently misused that I’d prefer to eschew them altogether rather than encourage others on my projects to overuse them.
What I really love about the whole ‘var or not to var’ war is when programmers obviously misstep in their arguments…
Like this very common one:
var orders = GetOrders();
foreach(var order in orders) {
ProcessOrder(order);
}
I don’t care if Customer.Orders is IEnumerable&lt;Order&gt;, ObservableCollection&lt;Order&gt; or BindingList&lt;Order&gt; – all I want is to keep that list in memory to iterate over it….
Obviously the programmer here does actually care: he cares about a specific capability of the object, a capability which the IEnumerable interface provides… He also cares that it’s an IEnumerable of types that ProcessOrder can handle; he might not care about the specifics.
By using var I actually lose that statement from him, whereas if he had written the code as:
IEnumerable&lt;Order&gt; orders = GetOrders();
foreach(var order in orders) {
Process(order);
}
it is explicit, and I know when I read the code that he didn’t care for anything more specific. So now I can’t go and give him a Dictionary of orders as that may break his code. And before you say “but it’s type safe, that would cause a compile time error”:
Not if there was another method that would match.
public void ProcessOrders()
{
var orders = GetOrders();
foreach (var order in orders)
{
Process(order);
}
}
private void Process(IOrder order)
{
order.Process();
}
private void Process(object unprocessable)
{
Console.WriteLine("Can't process object of type: " + unprocessable.GetType());
}
That is sort of a silly example as it stands, but you can’t rule out the possibility of running into the scenario it outlines… And by returning something which lives up to what “ProcessOrders” asks, you might just have caused a side-effect way down the system.
Obviously there should be tests, but there shouldn’t be tests for things the compiler could have caught if we had actually told it what we wanted rather than asking it to just figure it out on its own…
If you come from a dynamic language and use that as an argument, you shouldn’t be using var, you should be using dynamic, which is a keyword I both love and hate… I love it when I can use it; I hate it when you hand one over to me… I have done my fair share of programming in dynamic languages as well, and I also use the dynamic capabilities of C# a lot, and I am loving all of it… But var has nothing to do with that…
I totally agree with the article. Some comments which may be worth integrating, from a reviewer’s perspective…
– Code is written once and read 15 times. Reading is more important than typing speed. The seconds saved by typing three to five characters and hitting tab for IntelliSense autocomplete of a type can never make up for the seconds every subsequent reader loses (that is the same argument the MSDN documentation makes).
– IntelliSense is not available everywhere, even in VS 2013. If, as a reviewer, you review a change in source control, you very often use a diffing tool.
– Western readers read from left to right. Having the type at the beginning makes things easier.
– I also think that “var” can be easier to read, but only if all the other good practices (like naming) are maintained. Unfortunately, that rarely happens (especially with programmers of limited experience), and practically never when var is used everywhere.
From a theory perspective:
– I do not believe it was the intent of the C# designers to make var general-purpose; otherwise it would have been in the first release. It is for anonymous types, and that is why it is in the language.
A final word: I believe this demon cannot be put back into Pandora’s box (Anders Hejlsberg’s fault). New programmers will never learn it differently, developers with different backgrounds (e.g. Python) will continue with their programming style, and convinced var seniors will block or ignore any coding styleguide that states otherwise. I have seen this social behaviour in both open-source and industrial environments. I just have one request for all var advocates: if you are a programmer of limited experience, use types. If you understand (and have experienced) the effect of your pro-var decision on reviewers and peer programmers, and accept it, then use var.
(limited experiences: that limit can be pretty high. I am 15 years programming and still learn every day)
“From a theory perspective:
– I do not believe it was the intent of the C# designers to make var general-purpose; otherwise it would have been in the first release. It is for anonymous types, and that is why it is in the language.”
There are some rather poor arguments against var here, but this is the worst. If it was intended only for anonymous types then it would only be allowed for anonymous types, rather than adding extensive support for type inference to the language. That something is missing from the first release of any piece of software tells us nothing about its intent.
Am I missing something here? I don’t see the relevance of this topic. Var itself is neither bad nor good; it is just something that you need to use in some cases and can use in others. Var does not break the concept of type safety.
Var can increase readability of the code.
Why should we not use var in order not to break compatibility with pre-C#3 compilers? There is already a very good chance that my code won’t compile with a compiler that does not support C# 3, so why should I stay away from something as simple as var?
“Var does not show the underlying type”. True if you program with notepad. Is anyone you know doing this?
In my eyes this is just a pathetic discussion. Are we going to discuss the usage of #region or // vs. /**/ next?
I agree with this article. I’ve been seeing a lot of developers overusing the var keyword. The intended use was explained very clearly in this article. The reasons I’ve heard for using var are: write less code in declarations, decouple code from concrete types, the DRY (Don’t Repeat Yourself) principle, it encourages better variable naming, and some well-known architects use it a lot in demos. Except for the last reason, all of these have some validity. The last reason is not valid because those were demos; how often do architects write code, and being an architect doesn’t mean they know the best-practice principles of software engineering.
All the other reasons are positives, but overusing the keyword causes more negatives. Overusing var prevents developers from easily reading the code without intellisense. The developers would need to jump from one code section to the next to figure out the type. This is the same symptom caused by the yo-yo anti-pattern. In addition, sometimes developers don’t have intellisense to read the code (code on the internet, code in a repository, or while comparing changes in a file).
Using var to decouple the code from concrete class references is not a good enough reason. There are better, more effective ways of doing that: interface development, and most of the creational patterns. Now that I’m thinking about it, if a variable declared with var is assigned to a class that implements more than one interface, then it would cause a lot of readability issues and a design issue. The example below shows how var can be very confusing when a class implements many interfaces.
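The example appears to have been lost from the comment; a minimal sketch of the point it describes might look like this (all types invented):

interface IAuditable { void Audit(); }
interface IPersistable { void Save(); }

class CustomerRecord : IAuditable, IPersistable
{
    public void Audit() { /* ... */ }
    public void Save() { /* ... */ }
}

// var binds to the concrete class, so the reader can't tell which of its
// interfaces the surrounding code actually depends on:
var record = new CustomerRecord();

// An explicit interface type states the dependency up front:
IPersistable persistable = new CustomerRecord();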
Like many developers, I initially overused the shiny new toy called “var.” Then one day I was writing some SharePoint code in a WinForms program, and while tracking down a weird bug discovered that the compiler decided my var was referring to a WinForms control rather than a class in the SharePoint object model!! It compiled and ran without complaint, but it gave wrong results. I immediately swore off the use of var unless I have no choice, such as with anonymous types and LINQ queries. Misusing var can introduce bugs and make code less understandable. Writing the full name of a type avoids the bugs and is valuable additional documentation of what the code does. Saving a bit of typing is the only benefit of using var and is not justified, IMO.
Nobody can reconstruct why the compiler picked the wrong type in your specific program! It is not clear whether it was the programmer’s fault or not. Theoretically this should not happen, because the inferred type is fully determined by the C# compiler.
Yeah… I agree with what Joe said about the usage of var, but then again, Joe also made a good point on var and its usage.
Definitively Joe counter-argued clearly on var usage, and after that I could not agree more with Joe when he says Joe is right on var usage. Though Joe replied that Joe could have used var better, Joe in his first reply went on about using var in a very explicit way.
Yeah… Joe is the man!
PS: let’s write War and Peace in acronyms and monosyllabic words; it will be more readable, and probably Joe will agree.
PS2: Joe is a lamb. beeeeee! 🙂
For those who are in favor of “var” and say that intellisense can solve all your problems, I say: don’t use var and let auto-complete solve all your problems.
But in reality, what I think is: don’t make code that depends on your IDE. Intellisense and auto-complete are just sugar. It should be easy to modify your code from any editor.
IMO your code should not only act as a piece of software, but also as documentation, i.e. explicit typing adds to your documentation.
Wasn’t var introduced together with LINQ?
It always felt to me that it only has a right to exist together with LINQ queries and the “horrible” resulting return types of such. For everything else, be a clean developer and write down the type. If your types are that long and unreadable, you have problems elsewhere.
the original documentation for var was only use it when you don’t know what you are getting
the next week it was changed to use it all the time
then next week only when you don’t know
var is for extremely lazy programmers
Intellisense makes it so you don’t have to type as much
Use var sparingly
var is more readable. Variable names always begin in the same column, the 5th from the left margin.
Good article Brad, thanks for taking the time to put it together. Think it could be a losing battle though – many years after you posted, I was recently told in no uncertain terms by a dev manager interviewing me that var would be mandatory in their code base – I didn’t get the job – phew! As a general thought, I’ve wondered why people would want to make their C# look like JavaScript?!
Saw something similar to this in our company code and I wanted to scream –
var result = (DataTable) null;
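For comparison, the conventional form says the same thing without the cast (assuming System.Data.DataTable, as the cast suggests):

DataTable result = null; // equivalent, no cast required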
var is for lazy developers and lazy developers write bad code
The question was asked: If it is “insane” to not type your variables explicitly then why is it “sane” to type your subexpressions implicitly?
Examples used to illustrate implicitly typed subexpressions include:
1) string fullName = customer.FirstName + " " + customer.LastName;
and 2)
decimal rate = 0.0525m;
decimal principal = 200000.00m;
decimal annualFees = 100.00m;
decimal closingCosts = 1000.00m;
decimal firstPayment = principal * (rate / 12) + annualFees / 12 + closingCosts;
with the question: Let’s suppose that you believe that it is important for all the types to be stated so that the code is more understandable. Why then is it not important for the types of all those subexpressions to be stated?
Dealing with the first example, most programmers know the behavior of C# objects when used in a string context — that their ToString() method is invoked to produce a String object that can be used in the concatenation. And certainly there are some inexperienced developers who do not know this, but they will eventually be experienced enough to know this is C# object behavior in string contexts. Then they will know, as every C# developer should, what type these subexpression objects are in a string context — and how they got to that type. In essence, this common knowledge means that these subexpressions actually are of known type to almost all experienced developers. No need, consequently, to explicitly type them since, for all practical purposes, just looking at the code tells developers the following: Hey — string context — their type will be String.
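A short sketch of that behavior:

object age = 36;
string label = "Age: " + age; // String.Concat calls age.ToString(), producing "Age: 36"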
In the second example, a similar albeit slightly less well-known type conversion takes place. For inexperienced developers, it only takes a few interactions with the numerical types to realize that conversions are happening. Again, just reading the code is all it takes to reach immediate conclusions about types.
So we can conclude, or at least I hope we can, that the reason why all these subexpressions do not need explicit typing is because, in the thinking of experienced developers, their types are apparent when reading the code.
These examples use well-known and extremely common built-in types, so they may not represent the edge cases where types or type conversions are not well known. In those cases, the developer would be better served by explicitly-typed expressions and subexpressions.
But let’s entertain a different kind of relatively common circumstance: Your next backlog item says something like, “In the public-facing records system, fetch the customer records on a month-by-month basis, order them by date, send them to the message queue for consumption by other systems.” A follow-up backlog item says: “Create an XML file of these records and send them to the (by now) antiquated logging system.” So you pull up the code, written last year by someone else and which you’ve never seen, and you’re happy to see there already is a data layer API you can use to request the customer records. An example you find in another method is:
var records = customerData.fetchRecords(startDate, endDate);
foreach(var record in records)
{
if(record.HasCustomerIssue())
{
record.HandleCustomerIssue();
}
}
And you’ve worked with the message queue system before and know the method to add to the message queue expects a List. But you don’t have access to the code base behind the data layer, so you don’t know the type of var records, and you can’t know it by reading (like you could in a string context), so you have to step into the code with a debugger. Ugh. And maybe you can do this, and learn its runtime type, but then the question becomes how you make use of that new information. How nice it would be to be able to make use of intellisense as you write, but that means typing the code explicitly enough to be able to invoke intellisense with class specificity for the type you wish to access a method or property on. The real problem here: you are trying to write code, and you have no easy and quick way of knowing the type of the objects you are being asked to use. Is records a DataTable subclass, a List, an array, an IQueryable, an iterator, etc.?
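A sketch of the explicitly-typed declaration the scenario is asking for (List&lt;CustomerRecord&gt; is an assumption, since the real API isn’t shown):

// Had the original author written this, a later reader would know at a
// glance whether the records can be handed straight to the message queue:
List<CustomerRecord> records = customerData.fetchRecords(startDate, endDate);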
I’ve been developing for literally 30 years. It has been my experience that the speed of debugging code after you write it is as important as the original purpose of the code, especially when numerous developers may work on it over time. Using var on everything is ridiculous and lazy. When scanning thousands of lines of code, it makes it difficult to follow the flow, purpose and outcome of the code. We have a newer developer who has started changing explicitly declared variables to var in existing code; it’s infuriating. var has its uses when the type is anonymous or the type is only used for a few lines of code. His argument is that it makes it more readable… huh? Not to mention it is a nightmare trying to do compares on code in TFS to see what changed. I truly wish some people would stop pushing var as some standard to be used everywhere. What happens when the underlying code is changed in derived classes where the new keyword is used and you really wanted the base class method or properties? It’s going to break. Same with accessing interfaces if you blindly go changing everything to var.
Hey man! You are 100% right about this! I’ve disliked it from the start.
Sure, a line like
var person = GetContact(someid);
// some lines of code
person.LastName = "Smith";
does not tell me the exact type.
Point is that the type is irrelevant until the variable is used a few lines later.
Does that line specify the type? Of course not. So then do we look up the declaration? I don’t – I hover the mouse and see the type – without scrolling.
Works just fine with var as with any specific declaration.
The idea of backward compatibility is rather odd. ‘var’ was introduced in 2008. So you really expect me to write code for C# 2005, a version without LINQ, without auto-properties, without async? Get real.
As many said, using var is (usually) using DRY.
That’s not lazy programming, that’s efficient programming.
Nothing wrong with that!
“I don’t – I hover the mouse and see the type – without scrolling.”
Really? I’m hovering my mouse over your code now and see nothing about the type. Pretty hard to hover a mouse over code in a video tutorial as well… ergo your argument is defunct.
Next you’ll be arguing that saying “Hey m8, r u out tonight, c u l8r if so” is efficient speaking as well.
Lazy is as lazy does. Sugar-coat it all you like, it won’t change the bitter taste your argument leaves in the mouth.
I’m a person who used to be against var and now I use it all the time. The main reasons why:
– Shorter lines, less clutter
– Variable names line up nicely, which helps with readability and block editing
– Less retyping when I refactor code
I review a lot of code where there is no intellisense and I rarely find myself wishing I knew the type. If I don’t understand the semantics of the code, I insist on better symbol names and/or comments.
I personally don’t see a problem with including type hints in the name. e.g.: individualContacts, backgroundColor, randomBasketFruit. This makes the code easier to read and improves predictive typing in IDEs that support it (nearly all of them do now). I’ve never once felt tempted to use Hungarian notation over a plain word for the type hint.
Lastly, I’ll say that I mentor developers and they usually balk at my use of var initially. It feels wrong because it resembles the loose types of other languages. I ask them to give it a try and focus on writing good variable names. After a few weeks, they generally start to like var and even use it in their personal code.
So my recommendation to most people is just give it a try and focus on code semantics over type clarity and see if you still feel the need to be explicit about types.