Making Code

This guide is currently woefully incomplete. The guide may be substantially reorganized in the future. However, the information in this guide should be accurate.

Some basic definitions
Definition of a computer program

There is a very simple definition of a “program” that a computer may run. A “program” is a set of ordered instructions. (Or, to phrase that in a different way, a “program” is a set of steps to be performed in a specific order.)

Understanding order

The term “ordered” (or “sequenced”) is simply saying that the instructions are placed in a specific order. To use an example unrelated to computers, consider these instructions:

  1. Put on socks
  2. Put on shoes
  3. Take off shoes
  4. Take off socks

Many people perform that process nearly every day. Trying to perform those steps in a different order may be notably challenging (and generally considered impractical). Similarly, a computer should complete a check on whether a required password was provided before the computer provides sensitive information or accepts requests to run certain commands.

So, a computer program is not just a collection of instructions. Rather, a computer program is a collection of instructions that exist in an order that somebody came up with.

(Slightly more specifically: a computer program is a set of ordered instructions that a computer may perform.)

In earlier years, most computer programs were sequential. This simply means that they would run one instruction, and then another instruction. In today's world of hyperthreaded and multi-core CPUs, the CPUs often try to use parallelization. That simply means that the computer tries to perform multiple instructions at the same time. For example, a person could try to put on a sweater/jacket/coat and a shoe at the same time. However, some tasks need to be performed before other tasks. For instance, a person who will be wearing three layers of clothing (a T-shirt, a sweater, and a jacket) should not expect to start putting on the jacket before the T-shirt is being worn.

Similarly, a computer program might be able to print a document and save a file to a disk at the same time. However, the computer program should not report that the file has been loaded from the disk until the computer is entirely done loading the file from the disk. Some things can be done in any order, or even at the same time, while other tasks need to be done in a certain order.

A “programmer” is simply someone who makes a computer program. (This does not necessarily mean that the person made the computer program from scratch. Modifying an existing set of instructions will result in a new set of instructions, and so a person making that sort of change may also be called a “programmer”.)

Language

There are different computer programming languages available. Some information about various languages may be found in the coding section, in sub-sections called “Full-blown computer programming languages” and “Command line shells / scripting languages”.

source code

Very often, programs are made in a language which humans find to be easier to use than the language that the computer is using when the program runs. For instance, a program may be made in C++ while a human is typing up the list of instructions. Then, the program gets converted into another language, such as “x86 Assembly Language”. The program may be converted yet again into another language called “x86 machine language”. The “x86 machine language” tends to use commands written out in binary, and may be the most efficient language for a specific computer's CPU. In contrast, the C++ language is much easier for humans to work with when reading and writing individual instructions.

[#lernplng]: How to learn a programming language
Similarities

Learning a programming language can be done very quickly when the learner already has substantial programming experience. In many cases, languages are so similar that a person reading code in a language may be able to understand what the code is probably supposed to do, even if the person has never seen the programming language before. This is based on some common patterns, such as the word “AND”, the ampersand symbol (“&”, also known as the “and” sign), and “+” (the “plus sign”) typically having some fairly similar meanings.

Creating code in a programming language may be a bit more challenging, because there are often rules that must be followed. Following all of the required rules might not be needed just to be able to read the code and get a basic understanding of what the code is trying to do. However, a problem (like the code simply refusing to run) can occur when the requirements are not met, and so writing code that actually works can require greater familiarity.

The similarities can actually be a trap, as discussed a bit by the sub-section labelled “Multi-lingual confusion” in the section about career paths: “Computer programming”/“Software development”.

However, the same similarities can be highly beneficial in helping to understand a new language. For example, VBScript is quite similar to some other programming languages. The most similar language is “Visual Basic”. There are also a lot of similarities to a language called BASIC, and JavaScript. Knowing BASIC and JavaScript (and some other languages like C), the author of this text was able to read Microsoft's official documentation of “Windows Script Host” and learned VBScript over a weekend, and then started programming professionally. (An established company, which had been around for decades, started paying for that computer programming.)

These similarities permit rapid learning of additional languages. There are a couple of recommendations to always keep in mind:

Beware of the multi-lingual trap

As was just mentioned, there is quite a bit of potential for “Multi-lingual confusion”. When working with a programming language, a person who already has experience with another programming language may accidentally apply rules that belong to that other programming language. This is very easy to do, and can cause problems.

Some people may quickly try to reduce such problems by giving some basic advice, like: “don't think that what happens in one language will also happen in another language”. The problem with that advice, though, is that what happens in one language is often very similar, and sometimes even identical, to what happens in another language. Guessing that similar results will occur is a technique that might actually work a vast majority of the time, saving time in many circumstances.

However, just keep in mind, at all times, that there are subtle differences. These differences can cause major troubles. Not only are these types of problems easy to make, but the problem can often be challenging to identify and fix. (As noted in the “Multi-lingual confusion” section, problems can exist because of “language nuances, even though a quick glance at the problematic code may often produce the incorrect result that the code seems to be correct.”)

This actually tends to become a greater problem as people gain experience with multiple languages. People who know more languages are more likely to encounter these sorts of problems.

The vast majority of experienced programmers surely do not regret learning at least two languages. There can also be great benefit to knowing multiple languages, such as separating syntax from basic core logic. So, knowing multiple languages is not something that is entirely bad. However, there is certainly a potential to be affected by this one particular drawback which does exist, so definitely be aware of it. When learning a second language, be aware of the danger of “multi-lingual confusion”.

There may be some conventions, such as the naming of variables or functions. For example, a programmer using C might prefer that a “constant” literal value be spelled with all capital letters, with underscores between words. A programmer using Java may be more inclined to use camelCase, and JavaScript programmers may have a stronger case for why Simonyi Károly's “Hungarian notation” is useful. The convention that is used most commonly in one programming language may vary from what is used most commonly in another language. However, like differences in style guidelines or guidelines related to how much to comment, violations of those conventions are often less destructive to whether code actually functions.

Expect the first language to take longer

When working with modern languages, a skill known as “imperative programming” needs to be learned. That broad topic covers several ideas that are commonly found in languages, such as syntax (which is the requirement to have things written correctly), variable declarations and assignments, keywords and operators, conditional statements, iterators (which allow loops), functional/procedural programming, and embedded comments.

Once these core principles are learned well, then there is often no need to re-learn those concepts.

By all means, try not to repeat the experience of a certain middle-aged or older adult who decided to take some computer programming classes, and so signed up for three different computer programming courses to learn three different languages without any prior programming experience.

Note: the following represents a generalized plan for learning a programming language.

Environment
Intro: the need to learn

A lot of times, experienced programmers try to teach a person how to program in a specific language, and do not spend any time at all demonstrating how to perform certain simple tasks like understanding the programming environment. This is probably unfortunate, because the person who wants to create a computer program will need to learn the environment. Too often, the person learning how to program will be attempting to learn the programming environment at the same time that the person is learning the rules of programming. This approach does typically work, in practice. However, learning multiple things at once is presumably more complicated, and perhaps slower.

Becoming familiar with the environment

Many people tend to overlook this. Getting used to a programming environment can take some time. Another task that can take some time is installing the software necessary to have an environment available. Many schools will have a compiler pre-installed (when languages that need one are taught), which might actually not be a very good thing, because students then never practice setting up an environment themselves.

Editing text

Know how to edit a text file. (This may involve editing a text file using any text editor that is available. When learning how to use an “integrated development environment” (“IDE”), understanding the preferred method of modifying source code may involve learning how to use the text editor that is part of that IDE.)

A sample program

Some other details, like compiling code (perhaps within an “integrated development environment” (“IDE”)) or modifying “properties” of an object, may be worth demonstrating early. (In C++, modifying properties may be something that is worth delaying until after objects are discussed. In Visual Basic, Microsoft's Visual Studio has been known to dedicate screen space to showing properties. In that case, and since modifying properties is a common method to show output, there may be more benefit to an early demonstration of how properties may be easily altered.)

Once a sample program is shown, many instructors are interested in moving on to more details, like how to declare variables and how to use conditional statements. However, there may be a bit more to learn about the environment.

Visibility

People have been known to lose their vision as they get older. Many instructors have shown information by using a projector, or by writing on a “chalk board” (for anyone using such pre-electric technology). When doing so, information has often been shown on just one side of a room, and so that information may look smaller (and, therefore, may be harder to read) for someone sitting on the other side of the room.

Even if everyone in a class CAN read information, some people might benefit from having the information be EASIER to read.

If there is an easy way to control the font size, let people know how they can make that adjustment. This way, they can control the font size as much as they desire. Also, they will know how easily a person can adjust the font size, and that may increase the likelihood that any struggling person will think about asking for an increase to the font size.

Finding code that runs

Know what code gets executed. If the source code is typically interpreted directly, which has been commonly done with some languages (such as code that is run within a web browser or within Microsoft's “WSH” software), then this is probably pretty easy. For other languages, which typically involve a “compiled” executable file containing “native” binary code (like how a C program is most commonly handled) or “byte code” (famously used by Java), knowing where the resulting code ends up may be worthwhile.

If the language uses compiled code (rather than just using interpreted source code), then be able to locate the executable file that is created. This is another detail that is often skipped during a computer programming class. In practice, this is often unnecessary to be able to pass exams, like many “final exams” in college, or an exam related to getting an “industry certification”. However, this knowledge is required at some point: software companies typically distribute executable files, not full images of an entire hard drive. For this same reason, the code that actually runs needs to be locatable.

A (true) joke

An employee described his entire programming career to a co-worker. He had obtained a book about learning how to program in C(++), and he “almost” got the compiler installed. That was the whole story.

(What makes this so entertaining is the clear understanding that a working compiler is an absolutely essential part of any C programmer's experience.)

Although this employee performed other work for his employer, obviously he was not providing substantial benefit from C++ programming skills.

A simple program

“Hello World” is famous. Displaying any sort of output is often used as an early example. Simply by having a program perform some identifiable task, a programmer can start to feel some level of control over at least a portion of what the computer is doing. The amount of control will certainly grow as the language is learned further.

Keywords, variable declarations, variable assignments

When discussing variable declarations, many students may question the need to specify different data types. When a DOS program was made in 16-bit C, the difference between using two bytes for a “short” integer, and four bytes for a “long” integer is a difference of two bytes. Although memory was costly in the 1990s and earlier, saving two bytes of RAM seems less important when computers have started to measure RAM in units like gigabytes.

There are a couple of reasons. First of all, people like speed, and using fewer bytes is generally faster. For example, a computer screen with a resolution of 1024x768 has 786,432 pixels. One standard way of keeping track of a pixel is to use 32 bits per pixel. If each pixel takes four bytes, that is three megabytes of information. Suddenly, a discussion that was mentioning four bytes has turned into three megabytes. So, even small amounts of memory can represent larger amounts of memory when multiplication occurs. Compared to larger amounts of memory, smaller amounts of memory can typically be processed more easily (including being processed faster).

However, memory usage is not the only reason that data types can be helpful. When the programming language of C was gaining popularity during the late twentieth century, there was a clearer need for restricting memory usage, and so data types helped to do that. The language of JavaScript makes no distinction between the data types used by a constant boolean and a variable that stores dozens of bits of mathematical precision. Even still, programmers may often use Simonyi Károly's “Hungarian notation” to create variable names that provide some indication as to what “type” of value is being stored. No memory gets saved by using this technique, yet some programmers choose to make references to different standard data types anyway. This simply proves that saving memory is not the only possible benefit to distinguishing between different data types.

There is benefit to a computer programmer being able to know what is happening. If a computer programmer identifies a variable as being “boolean”, or “unsigned”, these identifiers might be helpful to communicate what types of values the variables are expected to be storing. Communicating this information can help an experienced programmer to more quickly understand a variable's purpose.

The need to hand-craft memory optimizations is less of a widespread concern than during the years that available memory was much more scarce. Still, there is no need to slow things down unnecessarily. Clearly documenting how much a resource may be used is a technique that can help to limit inefficiency. Since a computer program can be viewed as a form of documentation (which describes what a computer will be doing), using an appropriate data type is a quick and standard way to communicate some details about how the variable will be used.

Operators

Many languages will have common symbols, like the “plus sign” (“+”) being used for addition. However, there may be some differences, such as what character is used for XOR. Another example is shown by Wikipedia's section about “Exponentiation” being used “In programming languages” (which discusses how to raise a number to a power, like determining the value when a number is “squared”). In the language of BASIC, ^ represents raising a number to a power, while many other languages (like Perl, Fortran, COBOL, Unix-style scripts meant to be interpreted by bash, and over a dozen other languages and computer programs) may use the multi-character ** operator. The popular language of C did not have any such symbolic operator for exponentiation, but relied on the pow function declared in the “math.h” header file. So, find a list of operators and see if there are any that seem surprising.

In addition to learning the more common operators, do learn precedence. Typically, parentheses outrank multiplication signs, and those have higher precedence than plus signs, but there may be some differences. For example, is AND given a higher precedence than OR?

Some other key items typically taught early

Key abilities include handling: Conditions (“if” statements), loops/iterations (“while”/“for”/“do”), code jumping (other than what is done for loops/iterations: this includes being able to declare and use “functions”/“procedures”/“routines”/“subroutines”), and creating comments.

Other items to learn

Other topics include: creating and using arrays, creating and using other custom objects, variable/object scope, and familiarities with standard functions like string manipulation. (For example, consider many of the functions in C's stdio, stdlib, and string libraries.)

Some other topics (which might be taught less frequently) might include: file handling, command line parameters, running other code (running an executable, perhaps integrating with another library), compatibility (portability, including feature compatibility testing, which has often been done when programming code that runs in various web browsers), and communications (with devices, including device initialization, and network connectivity).

Other advanced topics might include object inheritance, permissions (if this wasn't covered at the same time as scope), and code signing.

Additional thoughts
The order

Do not be afraid to learn things in a different order than what this general guide presented. For instance, in the language of BASIC, a “Hello World” program is likely to use an internal command called “PRINT” (or the commonly used abbreviation, “?”). The language of C is more likely to use a printf function. C++ may involve using the cout object, and JavaScript will likely also involve using an object (which is either the document.window object in the original style of JavaScript, which is what ran in web browsers, or some other alternative like the WScript object in WSH). So, those different languages used different approaches (an internal command, a function in a library, or objects). As just demonstrated, different approaches make more sense in different languages.

There may be other differences in languages that cause one topic to become more important than others. For example, when learning JavaScript, integration with HTML is typically an extremely early step that is covered. As a comparison, C can work with code written in other languages, but performing such tasks during the “linker” stage is not usually the first task that gets covered. So, when deciding to learn a new language, the general guide that was provided is a guideline of the types of things that are typically learned. However, expect that variations may provide the most sensible approach.

Other topics

There are certainly other topics that may be learned, such as additional details comparing how code may be optimized (for low usage of some resource, such as disk space or memory or network traffic or CPU execution time). Such topics may be taught during studies of topics like recursion or data sorting. Studies related to these topics are often focused more on how code operates, and less about language-specific syntax.

Another example of a language-independent topic is learning concepts like binary numbers and hexadecimal, or the binary “truth tables”. There may be some differences, like how hexadecimal digits are represented or what symbol(s)/character(s) are used to represent the XOR operation, but the general concepts are independent of a language's specific syntax.

Certain concepts related to re-usability (such as performing inheritance processes, like polymorphism) may be viewed as more important by some people (like people learning Java) compared to others (like someone who is using Unix-style shell scripting, or older-style C, and so is not even using an object-oriented language). Opinions certainly may vary, and this guide is certainly not trying to provide a convincing proof that this is the only way to learn a language. If an established expert identifies how a specific approach would be better than this generalized guide, then that expertise may very well be quite useful.

Non-compound Commands/Statements

A computer programming language will typically have different commands. For example, the following program in the language of QBASIC may display a message to the screen.

PRINT "Hello, world!"

(When the term “PRINT” is used, many people might start thinking about a “printer”, which is a device that places ink onto paper. However, many programming languages use the term “print” to refer to displaying a message on the screen. These “print” commands may pre-date the common usage of computers connecting to a device that caused ink to go onto paper.)

In C, a corresponding command might look something like this:

printf("Hello, world!");

(In C, the most common “print” command is considered to be a “function”. The letter f at the end of printf is actually short for “formatted”, because the function prints formatted output. Functions will be described in more detail later.)

Values (Data in memory)
Simple data types
Literal
e.g., the symbol “2” (without the quotation marks) simply represents the number two (in many programming languages). Likewise, in many programming languages the string "ABC" (which may typically include the surrounding quotation marks) simply refers to the letter A, followed by the letter B, followed by the letter C. In this case, "ABC" does not refer to a name. It refers to data that does get stored in the computer's memory.
Variables

Variables are sections of memory that have a name assigned. For example, if a variable is named “foo”, then there is a section of memory that has been given the name of “foo”. The program can then have an instruction that refers to the name of that variable, and the computer will interact with the specific piece of memory that is related to that variable name.

However, the description provided so far isn't really complete enough. A better description is this: Variables are sections of modifiable memory that have a name assigned. What this means is simply that computer memory stores a value, and that value can be changed. This is why the term “variable” is used. The value in that memory can vary.

What that simply means is that a program can store a specific value (such as “2”) in the memory, and then later on the program can cause that memory to store a different value (such as “4”).

Constants

Named constants are similar to variables in that they are sections of memory that have a name assigned. However, the computer programming language may have a rule that prevents the memory from being changed as long as the program is actively keeping track of that memory location. A typical example is to have a computer keep track of the number “3.141592653589793238”. The computer will have that value stored in memory, and that section of memory may be given a name of “PI”. At some point, the program might be done with the task that actively uses that number, and so the program might stop keeping track of the section of memory that stored that number. (That section of memory might become available to be used for another purpose.) However, as long as the computer is remembering that section of memory, and the computer is remembering that the name “PI” is referring to that memory, then that memory should have the value of “3.141592653589793238”. If that memory does get accidentally changed, that will probably cause a problem which might not be easy to notice. Therefore, that memory should not change as long as the computer is using the name “PI” to keep track of that value.

If the person making the computer program makes a mistake and accidentally tries to change the value stored in that computer memory, the rules of the computer language specify that there will be a problem. Hopefully this problem will cause an error message when the computer programmer tries to test that the program is operating as desired. If the error is not detected until later, then perhaps the person who uses the program will see an error message. Seeing an error message is generally better than having the program create incorrect results (which people might not notice quickly).

As another example: a person who works at a bank may look at information at a bank account. That person might need to add more money into the bank account. The amount of money in an account is a number that changes relatively frequently, so that may be stored in a variable. However, the information related to that bank account will also include the name of the person who owns the money in that bank account. The name of that person will not change frequently (and might never change as long as that person is alive). So, the computer program may store the person's name in a “constant”, which will prevent the person's name from being accidentally changed.

In this example, the person's name might still be able to be changed. Perhaps the person using the computer will need to press some more keystrokes or use some additional mouse clicks. By performing a special procedure that is a little bit more complicated, the person working at the bank may be able to change the name that is related to the bank account. This special procedure will likely involve having the computer create a new “variable” so that the information can be changed. So, the idea of a “constant” is not that a person is unable to change the computer memory. The idea of a “constant” is just that the computer memory cannot be changed as easily. Specifically, the computer program cannot use the constant's name to make changes to that memory.

Using a “constant” is often done to simply prevent mistakes made by end users, mistakes made by the computer programmers, or unauthorized changes by people who might be trying to cause trouble.

Naming convention

The rules for the name of a constant are often identical (or at least similar) to the rules for the name of a variable. (For example, some languages may say that the name cannot contain a number, or the rules might state that the name may contain a number but must start with a letter. The specific rules can vary between languages.) However, there may be different conventions. While variable names often start with lowercase letters, many languages will use a convention that the name of a constant will not use any lowercase letters. (Instead, all letters will be uppercase.) Because capitalization cannot be effectively used to separate words, words are either crunched together with no separator (e.g. MAXPATH) or are separated by underscores (e.g.: MAX_PATH) if that is allowed, or else maybe some other method (e.g. MAX-PATH would be separated by hyphens, although many languages would treat the hyphen as a minus sign that separates two words).

Words should be separated (with underscores) if the word breaks would not be obvious when reading the name. For instance, MAXIMALLOWED might stand for “Max I'm Allowed”, or “Maximal Lowed”, or “Maxim All Owed”. Out of all of those, perhaps only the first reading makes sense. However, what most readers will do is simply try to identify a word, and then see if they can successfully keep reading from that. Because there are at least three possible initial words, there's quite a bit of possibility for confusion, or at least slowdown, because people may need to retry reading until they find something that makes the most sense. In that case, underscores can help a lot: MAX_IM_ALLOWED may be much easier for people to instantly understand. On the other hand, MAXPATH is straightforward when reading it. Some people might prefer MAX_PATH for clarity, while many people would prefer MAXPATH for terseness.

Arrays
Storing multiple types

Note: this is a bit of a more advanced topic, and might be best to skip until after people have some solid experience of handling arrays and other types of data structures. However, this information is fairly specific to arrays, and this may help to answer a question that some people have when they start working with arrays.

Arrays are typically limited to storing multiple values of a single type. For instance, an array might be an array of integers, or an array of boolean values. However, typically a language requires that all of the elements in an array are the same data type. An array will not contain fifty elements, and have the first third of the elements be integers, the second third of the elements be strings, and the final third of the elements be floating point numbers. Most languages will simply not allow that.

Other data structures, such as an “object”, may be able to store multiple different kinds of data. For example, an object might contain an integer and a string. Then, an array could be an array of that kind of object. Using this way of working around the limitation, an array could actually, indirectly, include various simpler types of data. Even when doing that, though, the array still directly contains nothing more than multiple copies of some specific type of data (even if that type of data is a complex type of data that stores other types of data).

The way that a computer stores an array in memory is often simpler than how the computer stores more complex data types. This simplicity may lead to some advantages, such as computers being faster at handling arrays compared to some other data structures that are more complicated. That is why arrays have these limits. Without these limits, arrays would lose some of their advantages, such as simplicity in memory layout (and some speed that can exist because of that simplicity). People who need more flexibility will simply use different data types.

Multi-primitive data structures

e.g.: Structures, classes, objects, and more complex data types (lists, queues, stacks, etc.). Also learn terminology such as “instance”, and how a class relates to an object.

Like arrays, these can store more than one variable. However, they may store more than one type of variable.

Here are some basic, quick (and possibly incomplete) definitions:

structure

Note: the term “data structure” is a generic term that describes various types of data layouts, including simple objects, lists, queues, and stacks. However, one of the earliest types of data structures was simply called a “structure”. In some/many programming languages, the term “struct” referred to this specific way of storing data. This section is specifically describing a “struct” (not the other ways of storing data).

A struct is simply a collection of variables. The variables do not need to be the same type. For instance, a particular struct might contain an integer and two strings. If there are two copies of that struct, both copies will have the same types of variables, but the variables within a single struct do not need to match each other's types.

So, a struct is a collection of data. The collection of data contains variables.

No methods

If a collection of data is being called a “struct”, then that collection of data does not contain functions.

One major change from C to C++ was the ability to have functions that are considered to be part of a collection of data. However, if this type of collection of data has a function contained in that collection, then there is some terminology change. The collection of data is no longer called a “struct”. Instead, the collection of data is called an “object” or a “class”. (The distinction between a “class” and an “object” has to do with the concept of an “instance”.) Also, if a function is part of that type of data collection, then there is another terminology change: the function is no longer called a function. Instead, a function that is part of an object or a class is called a “method”.

For this reason, a “struct” cannot have a function. Any collection of data that is called a “struct” will not have any functions. (The reason is simply that if a function was part of the collection of data, then the collection of data is no longer called a “struct”.)

properties, methods
properties

If you understand the concept of a structure, this description should make sense. Additionally, for this definition to make sense, simply know (for now) that an “object” is similar in concept to a “structure”. (The differences will be described later.)

In the following definition, the term “memory segment” simply refers to a variable, or something similar to a variable (like a const).

A “property” is simply a memory segment that is part of a data structure, such as a “struct” or an “object”.

There is no real technical difference between a variable that is part of a function and a property that is part of an object. The only difference is that the term “property” indicates that this memory segment is part of a more advanced data structure. On the other hand, the term “variable” will often not be used when discussing an object, simply because the term “property” is preferred when discussing a memory segment that is part of an object.

Regarding structures: this may vary a bit between programming languages. Some programming languages may refer to such a variable as a “property” (e.g. C++), while this kind of memory might still more commonly be called a “variable” in another language (like C). Modern programmers should be familiar with both terms, and so should not be confused if another programmer uses one term instead of the other. However, that is really only true when discussing a “struct”. If discussing an “object”, or the “class” related to that object, then the term “property” is generally used.

Perhaps the only time that the “property” of an object could properly be called a “variable” is when a person is trying to clarify that a property's value could vary, distinguishing that property from a “const” which would not be a variable. In general, though, the proper distinction to make is the fact that this memory segment is part of an object, and that is done by using the term “property”.

Programmers of some programming languages where objects are typically used frequently, like the languages of Java or Visual Basic, may find that the term “property” is almost always what is used.

method
object

An object is like a structure, but it can have references to things called methods. A method is really the same thing as a function. If an object is using a “function”, then that function is no longer called a “function”. Instead, that function is called a “method”. The only difference between a function and a method is that a method is something that is part of an object. So, both a function and a method perform the same conceptual task. They act the same. The difference is simply terminology. The term “method” indicates that the series of steps is considered to be one of the pieces that is part of an object, while a “function” is not considered to be part of an object.

One nice thing about the term “method” is that it is consistent across object-supporting languages. The term “function” is used in C, while some other languages use other terms like “procedure” or “routine”. However, all languages that support objects will use the same term, which is “method”.

In C++, the only real difference between an instance of a “struct” (which is the term that C and C++ use for a structure) and an “object” is that “objects” are capable of having methods.

An object does not need to have methods. If an object does not have any methods, it could just as well be a structure. However, a computer programmer can have an object contain a method.

Code branches
Conditional Statements

This is sometimes referred to as a “selection” statement. The idea behind that term is simply that the program will select which code to run.

Program flow control
Loops
while
...
for
typical for: iterating

To understand the word “iterate”, see: iterate.

The “iterative” “for” loop is something that really provides no substantial functionality beyond what can be done with a while loop. In C, the syntax of a for loop looks something like this:

for(init;condition;increment) statement;

The functionality of this code is absolutely identical to code which looks like this:

init;
while(condition)
{
    statement;
    increment;
}

and that code is exactly identical to:

init;
if(condition)
{
    do
    {
        statement;
        increment;
    } while(condition);
}

(Actually, there may be some minor difference if a variable is declared in the “for” loop. The difference would be the scope of the variable. However, at least some older versions of the C programming language actually did not allow such localized variables, so such minor differences might not even be permitted by the rules of the language.)

Coding Culture Commentary: Why use iterative for loops?

Inexperienced programmers may struggle a bit to remember the order of the different pieces of the iterative “for” loop. During that struggle, such programmers may often wonder: if there are already two other ways to do things, why did somebody just have to go and invent yet another way to handle loops?

So, in truth, a for loop is really entirely unnecessary. However, the for loop is often the most common loop construct that is used.

The reason that people like the for loop so much is that it provides a rather standard structure to take care of some tasks that are commonly required for many loops. Using this standard structure can be easier for the programmer. Also, anyone who views the resulting code is immediately provided with some details about how the loop's “flow control” operates. Therefore, those details can be analyzed just by looking at one location, and then a person can continue to read more details to see what happens whenever the loop runs.

Speaking from experience, memorizing the syntax of a for loop does take some time. Also, the syntax can be forgotten quite easily if this type of loop never gets used much. However, after a little bit (perhaps a week or a few) of heavy usage, the practice leads to the syntax being memorized so well that the syntax no longer seems complicated. At that point, the structure of the for loop starts to appear to be a benefit, rather than a burden.

If an instructor is demonstrating beginning programming techniques, the instructor can describe what they are doing when they type/write each section of the for loop. Students can concentrate on whatever other topic is being taught, giving very little thought to the for loop's structure. However, the repetition helps to reinforce an understanding of the for loop's structure. Simply seeing a for loop repeatedly will help the correct structure to start to feel more natural. For people who are learning programming through self-study, simply looking at lots of code that involves iterative for loops will probably be similarly sufficient.

enumerating for

e.g., supported by Java, JavaScript, VBScript, and C++11 (a C++ standard released around the year 2011). This is not supported by some other programming languages, such as C89 (a standard of C that was released around the year 1989).

In many cases, this is not substantially different than just declaring a variable and then iterating through a loop. However, using the “enumerating for” will sometimes result in code that is actually more concise, and quite possibly simpler to read, than using an iterator.

In some cases, at least with some programming languages, this method may allow traversing through an object without requiring knowledge about how many items exist within that object. This may be useful for code that does not have full permission to jump to a specific element within an object, but which is allowed to traverse. This may often indicate a rather unideal (slow) implementation, but permissions may actually require using an “enumerating for” rather than an indexed array. So, this programming technique can definitely have its uses (perhaps mainly when working in an environment that imposes limits).

do

The “do...while” loop gets a bit of an unfair reputation as being rather useless. In practice, many programmers simply do not use “do” loops very frequently. However, “do” loops are actually very simple for the computer to process. In fact, many compilers will simply translate a “while” loop into a conditional branch (a “goto”) followed by code that looks identical to what is used by a “do” loop. Also, a “for” statement involves multiple parts, including code that looks like what happens with a “while” statement. The “do” loop is actually the simplest variation, requiring the fewest computer instructions.

do...while

This specifies to perform an operation, and then to keep performing an operation as long as a certain condition is true.

In many languages, including C, the word “while” starts a while loop, unless it comes at the end of a “do” statement. This is actually rather unfortunate: the same word is used in two different (although very similar) ways, depending on where the word appears. It would be much better if the word was completely different. For example, say that a person accidentally comments out the line containing the word “do”, while trying to comment out the previous line (and was just being careless about which line got commented out). In this scenario, the code would probably operate identically for the first run through the loop. However, if the condition is true, then the result will generally be an endless loop, instead of running through the code again. This provides different results than expected, and those results happen when the program is running. In general, it would have been nicer if a different word (like “docond”, to specify the condition for a “do” loop) was used. Then, if the word “do” was commented out, an unexpected “docond” would lead to a compiler error that would probably be noticed by the programmer (instead of the end user) and would probably be easier to debug. However, the word “docond” is simply fiction written for this paragraph. The reality is that the word “while” is what is used to end a simple “do” loop.

do...until

This specifies to perform an operation, and then to keep performing the operation as long as a certain condition is false. Many programmers find this sort of loop to be fairly useless, because the approach is not substantially different from using a “do...while” loop with a different test. Specifying “do { /* something */ } until (condition);” is effectively the same as specifying “do { /* something */ } while (!condition);”. (Although this example makes the while loop's version look slightly more complicated, because it has an added exclamation point, that really depends on what the condition is. In some cases, the while loop's version may actually be slightly simpler. However, in all cases, the difference is very likely to be extremely minimal.)

Coding Culture Commentary: until

Some languages, including C, do not even support a “do...until” syntax. (People who use such languages have been known to think that supporting a “do...until” syntax is a rather pointless addition, one that complicates the language in a very minor, but entirely useless, way.) The biggest benefit seems to be making things easier for people who do not have experience with reversing conditions. Although that may seem to make the learning curve a bit easier, it does require that people learn yet another loop construction. People would probably be served better by spending the time learning how to write negated conditions, as that skill ends up being useful far more often.

Coding Culture Commentary: i, j, k

Many programmers will use a variable named “i” to represent an iterator. Also, many array indexes are equal to iterators, and as a result, array indexes have often started to use that same variable name.

However, don't typical “style” guidelines suggest that names should reflect how they are used? Also, even if single variable names are being used, why is the letter “i” used as the first variable? (The use of the letters “j” and “k” seem clear, since those letters follow the letter “i” in the alphabet. But why was “i” first?)

The use of the letter “i” seems to have originated from the language of FORTRAN. (That seems to be a general consensus from multiple sources, although Wikipedia's page for “loop counter” has a “Citation needed” for that claim, at the time of this writing.) Variables were given different “types” based on the first letter of the variable name. Variables that started with the letters “i” through “n” were treated as integers. A comment, at Stack Overflow, about variable names notes that even FORTRAN may have simply been following a precedent used by mathematicians.

As for single-letter variable names, the primary advantage of keeping the variable names short is clear: speed (in typing, even more so when writing by hand, and also when reading). Keeping the iterator's variable name short can reduce the size of lengthier phrases (like “array[c]”).

Keeping that phrase short can help with being able to quickly read code. (As a contrast, see the example at A comment, at Stack Overflow, about variable names.)

Typically, the primary obvious cost of using single-letter variable names is that the names do not communicate much about what is in the variable. However, this cost is offset a bit by culture. Experienced programmers may be so familiar with the standard practice of using “i” as an iterator that they suspect that any variable named “i” is probably storing a value that helps the code function, like a loop iterator or an array index. Such values are far less likely to be saved to long-term storage (such as a user's file stored on a hard drive).

This common use of such conventions ends up overriding the cost that would otherwise be associated with such a short variable name. There may be some other conventions, such as a variable named “f” when it is storing some temporary data that includes “floating point” information. However, of all these conventions, using “i” (and then “j”, and then “k”) is probably the most common.

In theory, if additional letters were to be used, then “l” would be the next logical choice, because “l” comes right after the letter “k”. However, in reality, “k” is used quite a bit less frequently than “j”, and using even more single-letter variables is really quite uncommon. Another part of this pseudo-standard is that the single-letter variables should typically be initialized and then used within a fairly small amount of code. (The amount of code may vary, but 1 to 4 lines, or perhaps even 1 to 7 lines, may be a rather typical amount.) When more than three such variables are needed, the amount of code is starting to be less trivial. When the code is getting lengthier and/or substantially complex, using longer variable names often gets to be a task that is not only justified, but also expected.

One downside is that the letter “j” may often look like a semi-colon, particularly when hand-written (sloppily). Wikipedia's article for “loop counter” states, “A variant convention is the use of” repeated “letters for the index” (like “ii” and “jj”) “as this allows easier searching and search-replacing than using a single letter.” Another convention is to use idx, since that is relatively short and yet more descriptive. (Comment at Stack Overflow about using idx refers to this as “often used”.)

Functions/Procedures/Routines/Methods
[#termfunc]: Functions/Procedures/Methods/Routines

(This text is in a section describing computer programming. For additional discussion on the term “procedure”, as it relates to rules (e.g. procedure/process, policy/guidelines), see: glossary: procedure.)

First, a description of a “method”, which may tremendously help some people who have some programming experience. (If this description makes little to no sense, don't worry about it for now.) A “method” is simply a “function” (or a “procedure”) that is related to an object.

Further details about a “method” are/will be included in the descriptions about objects. For now, simply know that a method is a function/procedure. Therefore, the descriptions here about functions and procedures will also be true about methods.

A “procedure” is simply a set of instructions.

In many languages, the term “function” is used to specify a procedure. The only difference is that the term “function” implies the mathematical concept which produces a result that can be referenced. A procedure might not generate a result that can easily be used by other computer code. Using such a strict definition, a function could be considered to be a type of procedure. However, this minor distinction is not necessarily relevant. In practice, the terms “function” and “procedure” and “routine” are frequently used interchangeably, meaning that in many cases there is no real significant difference between the terms. Often, the only difference is simply the convention of using a term that is preferred when a specific language is being used. In Pascal, programmers use procedures. A programmer who needs a function created will create a “procedure”. In C, programmers use functions. Even a procedure which does not return a value (because the type of return value is a “void”) is still called a function. Sometimes, perhaps particularly with assembly language, a procedure might be called a “routine”. All of these concepts refer to a series of steps.

New computer programmers might be quick to point out that this definition (“a series of steps”) sounds identical to the definition of a “program”. This is true. The broad concept of a procedure really isn't all that different from the broad concept of a program. The only real difference is just that a procedure is typically a more limited series of steps. A program often refers to a series of steps that gets run when the operating system specifies that a program needs to run. For instance, when a user types in a command line, or double-clicks an icon on a desktop, or chooses a program from a “Start Menu”, then the operating system will start a program. A procedure typically refers to a series of steps that is part of a program. For instance, choosing a menu option (like choosing to save a file) may cause a procedure to run. In both cases, a user's actions caused a series of steps to be performed, so both terms (“program” and “procedure”) refer to the same type of general concept. A procedure just typically refers to a smaller series of steps.

Commonly, a function will “return” a value. In mathematics, one of the most well-known functions is called “absolute value”. If the “absolute value” function is provided with the number -37 (negative thirty seven), the “absolute value” function returns a value of 37 (positive thirty seven). In math, a function is defined as a process that always returns the exact same value anytime that it is provided with the same value.

Often, computer programming will act similar: a function is expected to provide the same output (results) anytime that the same input is provided. (The term “input” may refer to memory variables provided to the function, as well as other sources of input, such as data that is read from a file that the function may access.)

(The following information is a bit of a more advanced topic, and may not be necessary before people have some experience creating functions.) Furthermore, there is commonly a limitation that a function can only return a single value. (That might initially sound like a significant limitation, but it is not. There is a workaround, which is to return a complex result, such as a “structure” or an “object”, or a file, or a memory location. By doing that, the function can effectively communicate multiple pieces of information without violating the rule of only returning one result.)

length

This is probably (highly) subjective, and is discussed in the section on code style.