Is JavaScript an Untyped Language?

Does untyped also mean dynamically typed in the academic CS world?

Yes, this is standard practice in academic literature. To understand it, it helps to know that the notion of "type" was invented in the 1930s, in the context of lambda calculus (in fact, even earlier, in the context of set theory). Since then, a whole branch of computational logic has emerged that is known as "type theory". Programming language theory is based on these foundations. And in all these mathematical contexts, "type" has a particular, well-established meaning.

The terminology "dynamic typing" was invented much later -- and it is a contradiction in terms in the face of the common mathematical use of the word "type".

For example, here is the definition of "type system" that Benjamin Pierce uses in his standard textbook, Types and Programming Languages:

A type system is a tractable syntactic method for proving the absence
of certain program behaviors by classifying phrases according to the
kinds of values they compute.

He also remarks:

The word “static” is sometimes added explicitly--we speak of a
“statically typed programming language,” for example--to distinguish the
sorts of compile-time analyses we are considering here from the
dynamic or latent typing found in languages such as Scheme (Sussman
and Steele, 1975; Kelsey, Clinger, and Rees, 1998; Dybvig, 1996),
where run-time type tags are used to distinguish different kinds of
structures in the heap. Terms like “dynamically typed” are arguably
misnomers and should probably be replaced by “dynamically checked,”
but the usage is standard.

Most people working in the field seem to share this point of view.

Note that this does not mean that "untyped" and "dynamically typed" are synonyms. Rather, the latter is a (technically misleading) name for a particular case of the former.
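Incidentally, Pierce's "run-time type tags" can be reproduced even inside a statically typed language. Here is a small sketch using Data.Dynamic from GHC's base library: each value carries a tag, and the tag is only checked when you try to take the value back out.

import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- A heterogeneous "heap": every value carries a run-time type tag.
heap :: [Dynamic]
heap = [toDyn (1 :: Int), toDyn "two", toDyn True]

main :: IO ()
main = do
  print (fromDynamic (head heap) :: Maybe Int)     -- Just 1: the tag matches
  print (fromDynamic (head heap) :: Maybe String)  -- Nothing: checked, and rejected, at run time

This is exactly "dynamically checked" in Pierce's sense: nothing is proved before the program runs; the tags are simply inspected as execution reaches them.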

PS: And FWIW, I happen to be both an academic researcher in type systems, and a non-academic implementer of JavaScript, so I have to live with the schism. :)

How can the lack of return type polymorphism in untyped languages be alleviated?

In a comment I made an assertion without justifying myself: I said that return type polymorphism isn't a meaningful concept in an untyped language. That was rude of me and I apologise for being so brusque. What I meant was something rather more subtle than what I said, so please allow me to attempt to make amends for my poor communication by explaining in more detail what I was trying to get at. (I hope this answer doesn't come across as condescending; I don't know your base level of knowledge so I'm going to start at the beginning.)

When Haskellers say "return type polymorphism", they're referring to one particular effect of the type class dispatch mechanism. It comes about as the interplay between dictionary passing and bidirectional type inference. (I'm going to ignore polymorphic _|_s like undefined :: forall a. a or let x = x in x :: forall a. a. They don't really count.)

First, note that type class instances in Haskell are syntactic sugar for explicit dictionary passing. By the time GHC translates your program into its Core intermediate representation, all the type classes are gone. They are replaced with "dictionary" records and passed around as regular explicit arguments; => is represented at runtime as ->. So code like

-- (shadowing the Prelude's Eq so this self-contained example compiles as written)
import Prelude hiding (Eq(..))

class Eq a where
  (==) :: a -> a -> Bool

instance Eq Bool where
  True  == True  = True
  False == False = True
  _     == _     = False

headEq :: Eq a => a -> [a] -> Bool
headEq _ []    = False
headEq x (y:_) = x == y

main = print $ headEq True [False]

is translated into something like

-- The class declaration is translated into a regular record type. (D for Dictionary)
data EqD a = EqD { eq :: a -> a -> Bool }
-- The instance is translated into a top-level value of that type
eqDBool :: EqD Bool
eqDBool = EqD { eq = eq }
  where eq True  True  = True
        eq False False = True
        eq _     _     = False

-- the Eq constraint is translated into a regular dictionary argument
headEq :: EqD a -> a -> [a] -> Bool
headEq _ _ [] = False
headEq eqD x (y:_) = eq eqD x y

-- the elaborator sees you're calling headEq with a ~ Bool and passes in Bool's instance dictionary
main = print $ headEq eqDBool True [False]

It works because of instance coherence: every constraint has at most one "best" matching instance (unless you switch on IncoherentInstances, which is usually a bad idea). At the call site of an overloaded function, the elaborator looks at the constraint's type parameter, searches for an instance matching that constraint - either a top-level instance or a constraint that's in scope - and passes in the single corresponding dictionary as an argument. (For more on the notion of instance coherence I recommend this talk by Ed Kmett. It's quite advanced - it took me a couple of watches to grasp his point - but it's full of insight.)

Much of the time, as in headEq, the constraint's type parameters can be determined by looking only at the types of the overloaded function's arguments, but in the case of polymorphic return values (such as read :: Read a => String -> a or mempty :: Monoid m => m) the typing information has to come from the call site's context. This works via the usual mechanism of bidirectional type inference: GHC looks at the return value's usages, generates and solves unification constraints to figure out its type, and then uses that type to search for an instance. It makes for a kinda magical developer experience: you write mempty and the machine figures out from the context which mempty you meant!
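Here's a tiny, self-contained illustration of that, using nothing beyond the Prelude and Data.Monoid (the helper names are made up for the example): in each case the instance is chosen from the type the context expects, not from any argument value.

import Data.Monoid (Sum(..))

parsedInt :: Int
parsedInt = read "42"      -- the signature tells GHC to pass the Read Int dictionary

noNames :: [String]
noNames = mempty           -- the signature selects the Monoid [String] instance

main :: IO ()
main = do
  print parsedInt                          -- 42
  print noNames                            -- []
  print (getSum (mappend mempty (Sum 5)))  -- 5: the surrounding expression fixes mempty's type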

(Incidentally, that's why show . read :: String -> String is forbidden. show and read are type class methods, whose concrete implementation isn't known without any clues about the type at which they're being used. The intermediate type in show . read - the one you're reading into and then showing from - is ambiguous, so GHC doesn't know how to choose an instance dictionary in order to generate runtime code.)
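By contrast, as soon as you pin down the intermediate type yourself, the ambiguity goes away and GHC knows exactly which dictionaries to pass. A minimal sketch:

roundTrip :: String -> String
roundTrip = show . (read :: String -> Int)   -- annotating read resolves the ambiguity

main :: IO ()
main = putStrLn (roundTrip "123")            -- prints 123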

So "return type polymorphism" is actually a slightly misleading term. It's really a by-word for a particular kind of type-directed code generation; its Core representation is just as a regular function whose return type can be determined from the type of its (dictionary) argument. In a language without type classes (or a language without types at all, like JS), you have to simulate type classes with explicit dictionary parameters that are manually passed around by the programmer, as @4Castle has demonstrated in another answer. You can't do type-directed code generation without types to be directed by!

Can you define Static and Dynamic Language without referring to run or compile times?

No, this is wrong.
"Static" and "dynamic" refer to types or variables, not to the whole language itself.
Plus, CSS and HTML are not real programming languages.

You could read this: Difference between static and dynamic programming languages, and find much more by googling.

What other languages are loosely-typed like JavaScript?

There is disagreement about exactly what "loose" or "weak" typing means; however, as commonly understood, "loose typing" refers to a language that has more forgiving typing rules, and might even implicitly convert values from one type to another.

From Wikipedia:

Liskov and Zilles defined a strongly-typed language as one in which "whenever an object is passed from a calling function to a called function, its type must be compatible with the type declared in the called function."

According to this definition, JavaScript is loosely-typed (i.e. the opposite of strongly typed) because most JavaScript operators will coerce their operands if necessary.

For example:

[2] - 1    // 1 (equivalent to 2 - 1)
[2] + 1    // "21" (equivalent to String([2]) + '1')
true && {} // no error, returns `{}`

Another example of a loosely-typed language is C.

Note that JavaScript is also dynamically typed. This means that the type of a value is bound to the value and checked at runtime, instead of being bound to the variable and checked at compile time. In effect, type checking is performed "dynamically" (i.e. at runtime) as opposed to "statically" (i.e. at compile time).

Examples of dynamic languages:

  • ActionScript
  • Clojure
  • Lisp
  • Lua
  • Perl
  • Python
  • Ruby
  • Smalltalk

Are types and OO coupled?

A static type is a property of a portion of a program (usually an expression) that the type checker will attempt to prove or disprove without executing the program, possibly while simultaneously inferring the property. (Actually, this is a bit off; dependent types and staged compilation make the truth a bit fuzzier than this.) Seen broadly enough, a dynamically typed language has one static type - the type of syntactically valid expressions.

A dynamic type is a property of an "object" (not necessarily an OO object) that the runtime will check automatically during program execution. Every mainstream statically typed language has some dynamic types, e.g. the divide function is statically defined to take two numbers and return a number, but dynamically defined such that the second number cannot be zero. There are statically typed languages for which "non-zero number" can be a statically proven type. In many statically typed languages (e.g. Java) non-null is a dynamic type, whereas in Haskell it's a static type. Etc.
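As a sketch of that last point (NonZero and mkNonZero are hypothetical names, purely for illustration), here is how the dynamic "divisor must not be zero" check can be moved into a static type in Haskell, so the check happens once at a boundary instead of at every call site:

-- A divisor that is non-zero by construction: the only way to build one is
-- through mkNonZero, so safeDiv never has to re-check at the call site.
newtype NonZero = NonZero Int

mkNonZero :: Int -> Maybe NonZero
mkNonZero 0 = Nothing
mkNonZero n = Just (NonZero n)

safeDiv :: Int -> NonZero -> Int
safeDiv x (NonZero y) = x `div` y

main :: IO ()
main = case mkNonZero 3 of
  Nothing -> putStrLn "refused a zero divisor"
  Just d  -> print (safeDiv 12 d)   -- 4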

The two are somewhat related (they're both ways to prevent undefined behavior) but also so totally different that it is a source of confusion that the word "type" is used to mean both.

Both static and dynamic type systems predate OO, and both have excellent non-OO languages to represent them (e.g. Scheme and SML). In fact, they predate computer programming as we know it: see the untyped and simply typed lambda calculi, which date to the 1930s and '40s.

For a more in depth discussion about the differences see: http://web.archive.org/web/20080822101209/http://www.pphsg.org/cdsmith/types.html

For one approach to looking at some properties of both static and dynamic types see: http://james-iry.blogspot.com/2010/05/types-la-chart.html

Do dynamically typed and untyped mean the same thing? Well... "untyped" comes from the untyped lambda calculus. It's called untyped in contrast to other lambda calculi, like the simply typed lambda calculus, which have static types. So certainly dynamically typed languages are untyped in that sense. But...

The untyped lambda calculus, unlike a "real" programming language, has the property that any syntactically valid expression can "make progress" without any need to test for any properties that might lead to undefined behavior. There's no equivalent of dividing by zero, dereferencing a null, or sending a message to an object that can't handle it. So one could make the argument that the untyped LC does not have a dynamic type system even though it is untyped.
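To make the "progress" claim concrete, here is a tiny sketch of the untyped lambda calculus in Haskell, using de Bruijn indices in the standard textbook style. Note that step has no error case: a closed term is either a lambda (a value) or has a redex on its leftmost spine, so there is nothing for a run-time tag check to catch.

-- Untyped lambda calculus with de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term
  deriving Show

-- Shift free indices >= c by d (needed for capture-avoiding substitution).
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App f a) = App (shift d c f) (shift d c a)

-- Substitute s for Var j.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App f a) = App (subst j s f) (subst j s a)

-- Beta-reduce: apply a lambda body to an argument, with the usual index bookkeeping.
beta :: Term -> Term -> Term
beta body arg = shift (-1) 0 (subst 0 (shift 1 0 arg) body)

-- One step of leftmost-outermost reduction; Nothing means no redex is left.
step :: Term -> Maybe Term
step (App (Lam body) arg) = Just (beta body arg)
step (App f a)            = case step f of
                              Just f' -> Just (App f' a)
                              Nothing -> App f <$> step a
step _                    = Nothing

main :: IO ()
main = print (step (App (Lam (Var 0)) (Lam (Var 0))))   -- Just (Lam (Var 0))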


