Below you can find all articles belonging to the JVM category.
The series about Scala's type system continues. After last week's article about path-dependent types, it's time to discover another type-related feature: self-types.
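To give a first taste of the topic, here is a minimal sketch of a self-type, not taken from the article itself (the Logger, Persister and Repository names are purely illustrative):

object SelfTypeDemo extends App {
  trait Logger {
    def log(message: String): Unit = println(message)
  }

  // The self-type `this: Logger =>` says: whatever mixes Persister in
  // must also mix in Logger, without Persister extending Logger itself.
  trait Persister {
    this: Logger =>

    def persist(record: String): Unit = log(s"persisting $record")
  }

  // Compiles only because Logger is mixed in alongside Persister.
  object Repository extends Persister with Logger

  Repository.persist("user#1")
}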
Before coming to Scala I had never imagined we could do so many things with nothing but a type system. Aside from higher-kinded types or type bounds that we can easily find in other languages, Scala offers more advanced type features such as path-dependent types, covered below.
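As a quick illustration of the idea (the Database and Table names are hypothetical, not the article's own example), a nested class type is bound to the instance that owns it:

object PathDependentDemo extends App {
  class Database {
    // Table is a path-dependent type: its complete type is bound to the
    // enclosing Database instance (usersDb.Table vs ordersDb.Table).
    class Table(val name: String)

    def register(table: Table): String = s"registered ${table.name}"
  }

  val usersDb = new Database
  val ordersDb = new Database
  val usersTable = new usersDb.Table("users")

  println(usersDb.register(usersTable)) // OK: a table of the same instance
  // ordersDb.register(usersTable)      // does not compile: different path
}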
Duck typing and Scala aren't words that go well together. However, with a special kind of type we can achieve an effect similar to the one found in dynamically typed languages.
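A minimal sketch of that special kind of type, a structural type, with illustrative names only:

import scala.language.reflectiveCalls

object StructuralTypeDemo extends App {
  // A structural type: anything exposing a `quack(): String` method fits,
  // no common trait required - a compile-checked flavour of duck typing.
  type Quacker = { def quack(): String }

  class Duck { def quack(): String = "quack" }
  class Robot { def quack(): String = "beep-quack" }

  def makeItQuack(quacker: Quacker): String = quacker.quack()

  println(makeItQuack(new Duck))
  println(makeItQuack(new Robot))
}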
Scala doesn't come with a static keyword as Java does. However, with the object singleton type it allows us to define static properties and methods. It goes even further and, thanks to the object "class", lets us build a construct called the companion object.
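A short sketch of a companion object playing the role of Java's static members (the Counter class is only an assumed example):

// The companion object shares the name and source file of the class it
// accompanies and can access its private members, including the constructor.
class Counter private (val value: Int)

object Counter {
  private var created = 0

  // A factory method replacing a public constructor.
  def apply(value: Int): Counter = {
    created += 1
    new Counter(value)
  }

  def createdInstances: Int = created
}

object CompanionDemo extends App {
  val counter = Counter(5)
  println(counter.value)             // 5
  println(Counter.createdInstances)  // 1
}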
Checked exceptions don't exist in Scala. However, thanks to functional data structures we can handle expected errors differently.
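One such structure is Either; a minimal sketch (assuming Scala 2.13 for String.toIntOption, names illustrative):

object ExpectedErrorsDemo extends App {
  // Either models an expected error as a value instead of a thrown,
  // checked exception: Left carries the failure, Right the result.
  def parseAge(raw: String): Either[String, Int] =
    raw.toIntOption match {
      case Some(age) if age >= 0 => Right(age)
      case Some(_)               => Left(s"negative age: $raw")
      case None                  => Left(s"not a number: $raw")
    }

  println(parseAge("30"))   // Right(30)
  println(parseAge("abc"))  // Left(not a number: abc)
}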
At first glance, the try-catch block seems to be the preferred approach to deal with exceptions for people coming to Scala from Java. However, in reality, it is not the only one available in Scala.
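One of the alternatives is scala.util.Try; a small sketch with assumed names:

import scala.util.{Failure, Success, Try}

object TryDemo extends App {
  // Try wraps code that may throw and turns the exception into a value,
  // so the caller can pattern match or recover instead of catching.
  def divide(a: Int, b: Int): Try[Int] = Try(a / b)

  divide(10, 2) match {
    case Success(result)    => println(s"result: $result")
    case Failure(exception) => println(s"failed: ${exception.getMessage}")
  }

  // Recovering with a fallback value, no try-catch block in sight.
  println(divide(10, 0).getOrElse(0))
}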
Who hasn't encountered the question of helper classes? For some, creating them isn't legitimate since everything can be linked to an object. For others they're fully legal because they help to keep the code base understandable. Scala comes with an idea that can make both sides agree: package objects.
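A minimal, hypothetical two-file layout showing the idea (the com.example.store package and its members are assumptions, not the article's code):

// store/package.scala
package com.example

// The package object gathers helpers that would otherwise live in an
// artificial *Utils class; they become visible to the whole package.
package object store {
  val DefaultCurrency: String = "EUR"

  def formatPrice(amount: BigDecimal): String = s"$amount $DefaultCurrency"
}

// store/PriceList.scala
package com.example.store

object PriceList extends App {
  // formatPrice and DefaultCurrency are resolved without any import.
  println(formatPrice(BigDecimal("9.99")))
}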
Scala is known as a more concise language than Java. One of the points illustrating this characteristic is class constructors, which can be defined in a single line, alongside the class name definition.
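A short sketch of such a one-line constructor (the User class is only an assumed example):

// The whole primary constructor fits on the class definition line:
// `name` and `age` become fields, `registered` gets a default value.
class User(val name: String, val age: Int, val registered: Boolean = false) {
  // An auxiliary constructor remains possible when needed.
  def this(name: String) = this(name, 0)
}

object ConstructorDemo extends App {
  val explicitUser = new User("alice", 30)
  val defaultUser = new User("bob")
  println(s"${explicitUser.name} ${defaultUser.age} ${defaultUser.registered}")
}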
Apache Spark inspired not only last week's post about closures but also the one you're reading about quasiquotes - a mysterious experimental Scala feature whose existence we can hardly suspect in the first months of working with the language.
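A minimal taste of the feature, assuming the scala-reflect module is on the classpath (Scala 2.11+; the snippet is illustrative, not the article's own code):

import scala.reflect.runtime.universe._

object QuasiquotesDemo extends App {
  // The q interpolator builds an abstract syntax tree from code-like text...
  val addition = q"1 + 2"
  println(showRaw(addition)) // prints the raw tree structure

  // ...and the same interpolator deconstructs trees in pattern matching.
  addition match {
    case q"$left + $right" => println(s"left: $left, right: $right")
  }
}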
If you're reading this blog, you've certainly noticed its strong interest in Apache Spark. One of the first problems we encounter with this data processing framework is a "Task not serializable" error caused by a non-serializable closure. In this post, outside of Spark's context, we'll focus on these specific functions.
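A small, Spark-free sketch of a closure capturing its environment (the names are assumptions for illustration):

object ClosureDemo extends App {
  var failedRecords = 0

  // The function literal closes over failedRecords: it captures the variable
  // itself, not a copy, which is exactly what makes closures tricky to
  // serialize and ship to remote executors.
  val parseOrZero: String => Int = raw =>
    try { raw.toInt }
    catch { case _: NumberFormatException => failedRecords += 1; 0 }

  println(Seq("1", "two", "3").map(parseOrZero).sum) // 4
  println(failedRecords)                             // 1
}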
In one of the previous posts we discovered Scala's laziness expressed with the lazy keyword. However, it's not the only way to implement it. Another one uses evaluation strategies, covered in the paragraphs below.
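A minimal sketch contrasting the two evaluation strategies, call-by-value and call-by-name (method names are illustrative only):

object EvaluationStrategiesDemo extends App {
  // Call-by-value: the argument is evaluated once, before the call.
  def byValue(message: String): Unit = println("by-value ready")

  // Call-by-name (the `=> String` type): the argument is evaluated only
  // if and when it's used inside the method body - here, never.
  def byName(message: => String): Unit = println("by-name ready")

  def expensiveMessage: String = {
    println("computing the message...")
    "hello"
  }

  byValue(expensiveMessage) // prints "computing the message..." first
  byName(expensiveMessage)  // never computes the message
}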
Scala's lazy instance generation can be helpful in a lot of places. It simplifies writing since we can declare an instance in the right, common place and delay its physical creation until its first use. In Java we have this possibility too, though it's much more verbose than in Scala.
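A one-keyword sketch of the idea (the configuration value is an assumed example):

object LazyValDemo extends App {
  // The initializer runs only on first access and only once - the one-line
  // counterpart of Java's verbose lazy-initialization idioms.
  lazy val configuration: Map[String, String] = {
    println("loading configuration...")
    Map("env" -> "dev")
  }

  println("before first access")
  println(configuration("env")) // triggers the loading
  println(configuration("env")) // reuses the already computed value
}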
The for loop was perhaps the most used iterative structure in Java before the introduction of lambda expressions. With this loop we're able to write everything - starting with a simple mapping and ending with a more complex "find first element in the collection" feature. In Scala we can implement these operations with monads and, despite that, the for loop remains a feature offering a lot of possibilities.
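A small sketch of a Scala for comprehension combining mapping and filtering (the orders data is illustrative):

object ForComprehensionDemo extends App {
  val orders = Seq(("alice", 120), ("bob", 15), ("carol", 78))

  // A for comprehension with a guard and a yield: mapping and filtering in
  // one structure, desugared by the compiler into map/withFilter calls.
  val bigSpenders = for {
    (client, amount) <- orders
    if amount > 50
  } yield s"$client spent $amount"

  println(bigSpenders) // List(alice spent 120, carol spent 78)
}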
At first glance Scala's pattern matching looks similar to Java's switch statement. But that's only a first impression, because after analyzing the differences we end up with something much smarter.
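A minimal sketch of what a switch can't do, with an assumed Shape hierarchy:

object PatternMatchingDemo extends App {
  sealed trait Shape
  case class Circle(radius: Double) extends Shape
  case class Rectangle(width: Double, height: Double) extends Shape

  // Unlike a switch, the match deconstructs objects, supports guards and is
  // checked for exhaustiveness on the sealed hierarchy.
  def describe(shape: Shape): String = shape match {
    case Circle(radius) if radius == 0               => "a point"
    case Circle(radius)                              => s"a circle of radius $radius"
    case Rectangle(width, height) if width == height => s"a square of side $width"
    case Rectangle(width, height)                    => s"a $width x $height rectangle"
  }

  println(describe(Circle(2.0)))
  println(describe(Rectangle(3.0, 3.0)))
}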
Scala's apply method is a convenient way to create objects without the new operator and thus reduce verbosity a little. Often, as for instance with case classes, apply is accompanied by its opposite, unapply, used in turn to build extractors.
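A short sketch of the pair, written by hand for a non-case class (the Temperature name is an assumption):

class Temperature(val celsius: Double)

object Temperature {
  // apply: construction without `new`.
  def apply(celsius: Double): Temperature = new Temperature(celsius)

  // unapply: the extractor used by pattern matching.
  def unapply(temperature: Temperature): Option[Double] = Some(temperature.celsius)
}

object ExtractorDemo extends App {
  val reading = Temperature(21.5)

  reading match {
    case Temperature(celsius) if celsius > 20 => println(s"warm: $celsius")
    case Temperature(celsius)                 => println(s"cold: $celsius")
  }
}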
Scala implicits have many drawbacks but in some cases their use is justified. They remain, though, one of the more advanced concepts for a lot of newcomers, and it's worth explaining them in a bit more detail.
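Two of their common uses, an implicit parameter and an implicit class, in a minimal sketch (names chosen only for illustration):

object ImplicitsDemo extends App {
  // An implicit parameter: the compiler fills in the argument from the
  // implicit values in scope (a plain String is used here only for brevity).
  implicit val defaultSeparator: String = ", "

  def join(items: Seq[String])(implicit separator: String): String =
    items.mkString(separator)

  println(join(Seq("a", "b", "c"))) // a, b, c

  // An implicit class: adds a method to an existing type (extension method).
  implicit class RichInt(private val value: Int) {
    def squared: Int = value * value
  }

  println(3.squared) // 9
}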
Some time ago I covered on this blog the complexity of Scala's immutable collections, explaining a little what happens under the hood. Now it's a good moment to come back to this topic and apply it to mutable collections.
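A quick sketch of two mutable collections and their usual complexity trade-offs (a summary, not the article's benchmark):

import scala.collection.mutable

object MutableCollectionsDemo extends App {
  // ArrayBuffer: amortized constant-time append, constant-time indexed access.
  val buffer = mutable.ArrayBuffer(1, 2, 3)
  buffer += 4              // in-place append, no new collection allocated
  println(buffer(2))       // 3

  // ListBuffer: constant-time append and prepend, linear indexed access.
  val listBuffer = mutable.ListBuffer("a", "b")
  "start" +=: listBuffer   // in-place prepend
  println(listBuffer)      // ListBuffer(start, a, b)
}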
Multiple inheritance can lead to a lot of issues, and one of the best known is the diamond problem, where the compiler doesn't know which of the inherited methods to use. However, in Scala we can use another structure to compose a class from several different classes while staying far away from the diamond problem. This structure is called a mixin.
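A minimal sketch of mixin composition with traits (the service and trait names are illustrative assumptions):

object MixinDemo extends App {
  trait Logging {
    def log(message: String): Unit = println(s"[log] $message")
  }

  trait Validating {
    def validate(value: String): Boolean = value.nonEmpty
  }

  // Mixin composition: OrderService gets behaviour from both traits, and
  // trait linearization removes the ambiguity of classic multiple inheritance.
  class OrderService extends Logging with Validating {
    def register(order: String): Unit =
      if (validate(order)) log(s"registered $order") else log("rejected empty order")
  }

  new OrderService().register("order-1")
}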
When a programming language provides operator overloading, the learning curve most of the time steepens because of the syntactic sugar it brings. After all, more operations end up expressed as symbols that are not meaningful (at least at first sight). Scala also comes with its own syntactic sugar that can be applied in a lot of places: sequences, functions or conversions.
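Two pieces of that sugar in a minimal sketch, symbolic method names and the apply/update shorthand (the Money class is a hypothetical example):

object SyntacticSugarDemo extends App {
  class Money(val amount: BigDecimal) {
    // `+` is an ordinary method name, so money1 + money2 is really money1.+(money2).
    def +(other: Money): Money = new Money(amount + other.amount)
  }

  val total = new Money(BigDecimal(10)) + new Money(BigDecimal(5))
  println(total.amount) // 15

  // More sugar: obj(key) calls apply, obj(key) = value calls update.
  val prices = scala.collection.mutable.Map("book" -> BigDecimal(20))
  prices("book") = BigDecimal(18)  // prices.update("book", BigDecimal(18))
  println(prices("book"))          // prices.apply("book") -> 18
}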
Types and type safety have a privileged place in Scala. But the wide range of techniques to deal with them makes discovering the language more difficult. And at first glance, one of the more difficult type-related concepts is higher-kinded types, covered in this post.
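A minimal sketch of a higher-kinded type parameter, with illustrative names only:

import scala.language.higherKinds // silences the feature warning on Scala 2.12 and earlier

object HigherKindedDemo extends App {
  // F[_] is a type constructor parameter: Wrapper abstracts over containers
  // (List, Option, ...) rather than over concrete element types.
  trait Wrapper[F[_]] {
    def wrap[A](value: A): F[A]
  }

  val listWrapper = new Wrapper[List] {
    override def wrap[A](value: A): List[A] = List(value)
  }

  val optionWrapper = new Wrapper[Option] {
    override def wrap[A](value: A): Option[A] = Some(value)
  }

  println(listWrapper.wrap(1))      // List(1)
  println(optionWrapper.wrap("a"))  // Some(a)
}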