Wednesday, December 11, 2024

The 2024 State of DevOps Report and the Importance of Internal Development Platforms

On the State of DevOps Report

The State of DevOps Report, published annually by the DevOps Research and Assessment (DORA) team, has been a cornerstone in the field of software development for over a decade. The 2024 edition was published this past October and, along with past reports, can be found on the DORA website. Since its inception, the report has gathered insights from more than 39,000 professionals across various industries and organizational sizes. This extensive sample ensures a comprehensive understanding of the DevOps landscape. The report is highly regarded for its in-depth analysis of the practices and capabilities that drive high performance in software delivery and operational efficiency. It serves as a benchmark for organizations aiming to improve their DevOps practices, providing valuable data on key performance indicators such as deployment frequency, lead time for changes, change failure rate, and mean time to recovery [6].

The primary goals of the State of DevOps Report are to:

  • Identify Best Practices: Highlight the practices that distinguish high-performing teams.
  • Provide Benchmarks: Offer metrics that organizations can use to measure their performance.
  • Drive Improvement: Encourage continuous improvement in software delivery and operational practices.
  • Highlight Trends: Explore emerging trends and challenges in the DevOps field, such as the impact of AI and the role of platform engineering.

In essence, the report aims to equip organizations with the knowledge and tools needed to enhance their DevOps capabilities and achieve better business outcomes.

On Internal Development Platforms

The 2024 State of DevOps Report highlights the growing significance of internal development platforms. These platforms are not just a trend but a cornerstone for enhancing efficiency, speed, and security in development. One of the key insights from the report is that "Teams do not need to sacrifice speed for stability" [2]. An internal development platform is one of the capabilities that makes this possible.

One of the key takeaways from the report is how internal development platforms empower developers by providing self-service tooling environments. This flexibility allows developers to work more efficiently and achieve their goals without unnecessary delays. As the report states, "Self-service tooling environments give developers the flexibility they need to work quickly and achieve their goals" [1]. This empowerment is crucial for fostering innovation and maintaining a competitive edge.

The report also emphasizes the role of internal development platforms in streamlining key processes through standardized automation. By eliminating repetitive tasks, these platforms accelerate delivery and improve overall productivity. "The full potential of DevOps is unlocked with standardized automation," the report notes [1]. This streamlined approach not only saves time but also reduces the likelihood of errors, leading to more reliable and consistent outcomes.

Security is another critical aspect highlighted in the report. With secure tools built in, internal development platforms ensure that security is integrated into the development process from the ground up. This proactive approach to security benefits everyone involved. "Security has never just been IT's job. With secure tools built into most platforms, Platform Engineering is empowering teams to build securely from the start" [1]. This integration helps in mitigating risks and protecting sensitive data.

Ultimately, the report underscores that the adoption of internal development platforms is a strategic move that supports organizational success. By fostering a culture of efficiency, speed, and security, these platforms help organizations achieve their goals more effectively. The report concludes, "The rise of developers is supported by the platform engineering team" [1]. This support is vital for driving innovation and achieving long-term success in the competitive tech landscape.

The 2024 State of DevOps Report makes it clear that self-service internal development platforms are not just beneficial but essential, allowing software development teams to move quickly and independently.

On Paved Roads and Golden Paths

In the realm of software development, the concepts of "paved roads" and "golden paths" have become integral to the success of internal development platforms. These terms, often used interchangeably, refer to the pre-defined, optimized workflows and best practices that streamline and expedite development processes, making life easier for developers and enhancing overall productivity.

At their core, paved roads and golden paths are about providing developers with a clear, supported route to follow. As Raffaele Spazzoli from Red Hat explains, "Golden Paths are a fundamental ingredient of well-architected Internal Developer Platforms" [3]. These paths offer pre-architected and supported approaches to building, testing, and deploying software, ensuring that developers can focus on coding rather than the intricacies of the underlying hosting infrastructure and CI/CD processes.

One of the primary benefits of these paths is the reduction in cognitive load for developers. Kaspar von Grünberg, CEO of Humanitec, describes golden paths as procedures "in the software development life cycle that a user can follow with minimal cognitive load and that drives standardization" [4]. By minimizing the mental effort required to navigate complex systems, developers can work more efficiently and effectively. When these paths let developers spend less time dealing with CI/CD, infrastructure, and other development processes and more time on creative problem-solving, their productivity and morale are likely to improve. The 2024 State of DevOps Report notes that "self-service tooling environments and standardized automation streamline processes, making it easier for developers to achieve their goals without unnecessary delays" [2]. This reduction in friction and the ability to work more efficiently contribute to a healthier work environment, ultimately helping to prevent burnout and improve overall job satisfaction.

Golden paths are not just about making things easier; they are about making things better. They provide a route toward alignment and standardization without forcing developers into a rigid framework. As noted in a blog post by Octopus, "Instead of forcing developers to do things a certain way, you design the internal developer platform to attract developers by reducing their burden and removing pain points" [5]. This approach fosters a more productive and innovative environment.

The focus on long-term benefits is another critical aspect of golden paths. According to Spazzoli, "The more sophisticated Golden Paths are, the more they will be adopted, providing, as a result, more uniformity of configuration and behavior across the application portfolio" [3]. This uniformity not only improves the quality of the software but also makes it easier to maintain and scale.

Paved roads and golden paths are essential components of internal development platforms. They reduce cognitive load, enhance productivity, and provide long-term benefits by fostering standardization and alignment. As the software development landscape continues to evolve, these concepts will remain crucial for organizations looking to streamline their processes and support their development teams effectively.

Conclusion

The 2024 State of DevOps Report underscores the transformative impact of internal development platforms and the strategic importance of paved roads and golden paths. By providing standardized, self-service tooling environments, these platforms empower developers to work more efficiently and securely, fostering innovation and maintaining a competitive edge. The report highlights that "Teams do not need to sacrifice speed for stability" [2], emphasizing that streamlined automation and integrated security are key to achieving high performance in software delivery.

Paved roads and golden paths reduce cognitive load, enhance productivity, and ensure consistency across development processes. They offer a clear, supported route for developers, allowing them to focus on creative problem-solving rather than the complexities of CI/CD pipelines. This approach not only improves job satisfaction and prevents burnout but also drives long-term success by fostering standardization and alignment.

In essence, the State of DevOps Report provides invaluable insights and benchmarks that help organizations enhance their DevOps capabilities. By adopting internal development platforms and leveraging paved roads and golden paths, organizations can achieve better business outcomes, support their development teams effectively, and stay ahead in the competitive tech landscape. As the software development field continues to evolve, these concepts will remain crucial for driving innovation and operational excellence.

References

  1. 2024 State of DevOps Report: The Evolution of Platform Engineering
  2. 2024 State of DevOps Report
  3. Red Hat Blog - Designing Golden Paths
  4. Platform Engineering Blog - How to pave golden paths that actually go somewhere
  5. Octopus Blog - Paved versus golden paths in Platform Engineering
  6. Middleware Blog - Only Hard Questions: Exploring the 2024 State of DevOps Report with Lead Investigator Derek DeBellis

Sunday, July 14, 2024

Demystifying PowerShell: Method Overloading, Inheritance, and Type Casting

PowerShell, with its object-oriented features, allows developers to create robust and flexible code. In the first article in this series, Object Oriented PowerShell, I discussed the basics of classes in PowerShell. In this post, we’ll explore a couple of code snippets that showcase method overloading, method overriding, and inheritance. Our journey will take us through the intricacies of PowerShell classes and their behaviors.

The Code

Here we have two classes, one derived from the other. As you can see, some of the methods in the base class are overridden in the derived class.
class Base
{
    [void] Foo()
    {
        Write-Host 'Base.Foo'
    }

    [void] Bar()
    {
        Write-Host 'Base.Bar'
    }

    static [void] Bar2()
    {
        Write-Host 'static Base.Bar2'
    }

    [void] Bar2([int] $i)
    {
        Write-Host "Bar2 int = $i"
    }

    # Overloaded method in the same class
    [void] Bar2([double] $f)
    {
        Write-Host "Bar2 double = $f"
    }
}

class Derived : Base
{
    [void] Foo()
    {
        Write-Host 'Derived.Foo'
    }

    [void] Baz()
    {
        Write-Host 'Derived.Baz'
    }

    static [void] Bar2()
    {
        [Base]::Bar2()
        Write-Host 'static Derived.Bar2'
    }

    # Overloaded method in the derived class
    [void] Bar2([string] $s)
    {
        Write-Host "Bar2 string = $s"
    }
}

Let's now call some of the methods and see if the behavior is what we would expect. Let's start by making an instance of the base class and calling its methods.
# Create an instance of Base
$base = [Base]::new()
# Call the methods of Base
$base.Foo()
$base.Bar()
[Base]::Bar2()
# Here we are calling the overloaded Bar2 method
$base.Bar2(5)
$base.Bar2(5.3)
And the results are what we would expect.
Base.Foo
Base.Bar
static Base.Bar2
Bar2 int = 5
Bar2 double = 5.3
Now let's do the same with an instance of Derived.
# Create an instance of Derived
$derived = [Derived]::new()
# Call the methods, including those inherited from Base. These all behave as expected
$derived.Foo()
$derived.Bar()
$derived.Baz()
[Derived]::Bar2()
$derived.Bar2(5)
$derived.Bar2(5.3)
$derived.Bar2("Hello World")
And the results are also what we would expect. There are some things to note. The most obvious one is that we call the static method Bar2() in Base as part of the call to the static method that shadows it in Derived. The syntax for this is straightforward. However, this syntax does not work for calling an overridden non-static method in a base class; for instance methods PowerShell uses a type cast instead, as shown in the sketch after the output below.

Additionally, we can add more overloads to a method in a derived class, as we did with the Bar2 method that accepts a string in Derived.
Derived.Foo
Base.Bar
Derived.Baz
static Base.Bar2 # we correctly call the base class's Bar2 method here
static Derived.Bar2
Bar2 int = 5
Bar2 double = 5.3
Bar2 string = Hello World # This overload exists only in Derived and is called correctly here
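So how do we call an overridden non-static method in a base class? PowerShell has no 'base' keyword for instance methods; instead, the documented pattern is to cast $this to the base type when invoking the method. Here is a minimal sketch of that pattern (the greeter classes are hypothetical, made up for this example):
class BaseGreeter
{
    [string] Greet()
    {
        return 'Hello from BaseGreeter'
    }
}

class DerivedGreeter : BaseGreeter
{
    [string] Greet()
    {
        # Casting $this to the base type invokes the base implementation.
        $baseGreeting = ([BaseGreeter]$this).Greet()
        return "$baseGreeting, extended by DerivedGreeter"
    }
}

[DerivedGreeter]::new().Greet()
And the output shows that both implementations ran:
Hello from BaseGreeter, extended by DerivedGreeter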
One other thing to note is that a method in a base class can call a method that exists only in a derived class. This provides a way to write a base class that is required to be derived from. Here is an example of that.
class Base
{
    [void] Foo()
    {
        $this.Bar() # Note there is no Bar method in Base
    }
}

class Derived : Base
{
    [void] Bar()
    {
        Write-Host "Derived.Bar"
    }
}
And the expected output. 
Derived.Bar
A few things to note here: as before, if there is a hierarchy of classes, the method on the most derived class is called. This example also shows how to call member methods (or access member data) from within a class: it is done via '$this.'.

Conclusion

As you can see from this post and the last one, PowerShell provides the ability to write Object-Oriented code. There are, as noted, some limitations, such as the lack of true private methods and data.

PowerShell’s object-oriented capabilities empower developers to build expressive code. Understanding method overloading, inheritance, and type casting is essential for creating maintainable and efficient classes.

Feel free to ask any questions or share your thoughts! 😊

Sunday, December 17, 2023

Object Oriented PowerShell

I recently discovered that PowerShell supports classes, so I started to wonder if I could code using standard object-oriented techniques in PowerShell. Classes are supported starting with PowerShell 5.0. If you are going to use classes in PowerShell, I strongly suggest using pwsh 7.0 or above; support there is much better than in the earlier versions.

In this series of posts, I am going to discuss how object-oriented programming works in PowerShell: what works well, and where, shall we say, there is room for improvement.

As you probably know, there are three principles of object orientation: encapsulation, inheritance, and polymorphism.

Let's start our discussion with encapsulation, the ability to bundle data and methods into a single unit, typically a class.

We can encapsulate data and methods inside of a class in PowerShell.

class Foo {
    Foo([int] $value) {
        $this.value = $value
    }

    [int] GetValue() {
        return $this.value
    }

    hidden [int] $value
}

Class Foo has one data element and two methods: a constructor and a getter. One of the limitations of PowerShell is that it is not possible to hide the internal state of a class. Hiding internal state is typically done by determining which parts (data and methods) of the class comprise the public interface and which are internal implementation details that should not be accessed outside of the class or derived classes.

In the case of Foo, if the intent is that the value is only settable at object construction, there is no way to enforce this.

But what about the keyword 'hidden' before the data member, you may ask? I find the keyword hidden to be a good convention for letting the users and maintainers of the class know that the intent is to keep the method or data member private. However, hidden data members and methods are still public, and therefore accessible outside of the class. The keyword hidden just hides the data member or method from the output of the Get-Member cmdlet.
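A quick demonstration, using the Foo class defined above:
$foo = [Foo]::new(42)
$foo.GetValue()    # 42, via the public getter
$foo.value         # also 42: 'hidden' does not block direct access
$foo.value = 7     # the member can even be modified from outside the class
$foo | Get-Member  # 'value' is omitted from this listing unless you add -Force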

In this area, PowerShell is lacking. There is no way to make methods or data truly private (or protected) to the class.

I find that one of the best pieces of coding advice was given by Dr. Seuss in Horton Hears a Who: "Say what you mean and mean what you say". What I mean by that is that code should express the intent, and the intent expressed should be what is intended. In the case of object-oriented languages, data and methods that should only be accessed by the class should be private, those that can be accessed by derived classes should be protected, and only those that are available to everyone should be public. Unfortunately, in PowerShell there is no way to indicate something is protected, and hidden is more of a suggestion than something enforceable for private data and methods.

In PowerShell, we can encapsulate, but we cannot protect the class internals from access by outsiders. You will need to decide if this works for your project. Next up: inheritance.

Monday, May 23, 2022

SQL RDBMS: Don't cross the streams

As Egon Spengler (Harold Ramis) said in Ghostbusters, "don't cross the streams." While in the SQL realm crossing the streams thankfully won't cause total protonic reversal, it is still something that should be avoided.

DDL and DML are two different types of SQL statements that can be sent from a client to the database. DDL, Data Definition Language, comprises the statements used to create objects in a database: tables, triggers, stored procedures, etc. DML, Data Manipulation Language, is used to manage the data in tables via SELECT, UPDATE, INSERT, and DELETE.

tl;dr: When writing applications that use a SQL database, the SQL logins used by the applications should not execute DDL.

Architecturally, it is best to isolate the user that creates objects in the database from the users that execute DML queries. This follows both the principles of Least Privilege and Separation of Concerns. Unless you are working with an application that manages databases, only the deployment of the app should execute DDL; the application(s) that interact with the database should not.

You may have noticed I said user (singular) that is creating/updating the objects in the database. As part of the deployment there should be a single user that owns all the objects in the schema/database, because this avoids broken ownership chains. Having a single owner for all objects means that only that user needs to be granted permissions to create objects in the schema/database; the other users only need rights to access some or all of the objects in the database, as sketched below.
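Here is a sketch of what this looks like in practice, with a deployment step run as the owning login and an application query run as a DML-only login. The server, database, table, and login names are hypothetical, and it assumes the SqlServer PowerShell module:
Import-Module SqlServer

# Deployment time: run as the owning (DDL) login. Create the objects and
# grant the application login DML rights only.
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'AppDb' -Query @'
CREATE TABLE dbo.Orders
(
    OrderId int IDENTITY PRIMARY KEY,
    Total   money NOT NULL
);
GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO AppUser;
'@

# Run time: the application connects as AppUser and issues only DML.
# $appUserPassword is assumed to hold the application login's password.
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'AppDb' `
    -Username 'AppUser' -Password $appUserPassword `
    -Query 'INSERT INTO dbo.Orders (Total) VALUES (19.99);'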

As with all guidelines there are a couple of caveats. Many deployments will record information about the currently installed version in a table, so the install will run DML as well as DDL. Additionally, when running the DDL to create the objects, the deployment may need to put default data in some tables.

Application logins may need to create temporary tables, although all else being equal, I'd rather see that in a stored procedure or function if possible. Putting that logic in a stored procedure or function creates an API for the database, which makes it easier to refactor the database without impacting the applications that use it.

Lastly, you may have noticed that the discussion above elides the difference between schema and database. In general, only a single schema is needed in a database. However, on rare occasions, more often in research than in business, there may be reasons to create multiple schemas in a database. This increases complexity and should be avoided unless needed. When multiple schemas are needed, objects should still only be created as part of the deployment. Each schema may have its own owner, or all schemas and their deployed objects may be created and owned by a single SQL user.

Not following the above guidelines leads to unnecessary complexity that affects the maintainability of the database and the applications that use the database. Also, it often leads to unintended consequences because of that complexity.

Happy Coding.

Monday, January 31, 2022

The Zen of Coding - Separation of Concerns

Continuing with my series on the Zen of coding, meta rules that apply universally regardless of coding language or style in use, today I want to discuss Separation of Concerns. 

Separation of Concerns is a design principle for partitioning a system into discrete logical elements. Each part of the system should focus on a single concern, which is not shared with other parts of the system. The term Separation of Concerns was probably coined by Dijkstra in 1974. This principle applies at all levels of a system, from the conceptual and logical models, down to the physical level. Again here, as with Magic Numbers, the benefit of following this design principle at the code level is more readable, and hence more maintainable code. 

Robert Martin's Single Responsibility Principle is another wording of the same design principle. When you properly separate concerns, each component, class, or module has a single responsibility, as in the sketch below.
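To make this concrete, here is a small PowerShell sketch (the URL and property names are hypothetical). The first function tangles two concerns together; the refactored pair gives each concern a home of its own:
# Two concerns tangled in one function: data access and presentation.
function Get-UserReport
{
    $users = Invoke-RestMethod -Uri 'https://example.test/api/users'
    foreach ($user in $users)
    {
        Write-Host ('{0,-20} {1}' -f $user.Name, $user.Email)
    }
}

# Refactored: each function has a single responsibility.
function Get-Users
{
    # Concern: data access.
    return Invoke-RestMethod -Uri 'https://example.test/api/users'
}

function Format-UserReport
{
    param([object[]] $Users)
    # Concern: presentation.
    foreach ($user in $Users)
    {
        Write-Host ('{0,-20} {1}' -f $user.Name, $user.Email)
    }
}

Format-UserReport -Users (Get-Users)
With the concerns separated, the retrieval logic can be tested or reused without dragging the formatting along, and the report format can change without touching the data access.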

If you don't follow the principle of Separation of Concerns, over time you end up with a monolithic design, which is harder to read, harder to refactor, and harder to maintain. The answer, of course, is to refactor when you find there is no longer a separation of concerns. This is just one more refactoring pattern that you should be applying daily as part of your work. It is part of the reason why refactoring needs to be part of the daily development cycle and not a separate story or event.

Separation of concerns applies at all levels of the design: from the details of the code, to classes, to how the system is partitioned into executable modules.

Separation of Concerns relates strongly to the concepts of Coupling and Cohesion, which are concerned with the degree of dependency within and between modules. We should always minimize dependencies between modules (coupling), and we should look for a high inter-relationship of functionality within a module (cohesion). A module may be a class construct or something higher level. Again here, Separation of Concerns leads to low coupling and high cohesion by keeping single concerns focused together and separate ones independent.

Zen of Coding Rule: Always separate concerns. 

Happy Coding.


Previous posts in this series:

  • The Zen of Coding - Magic Numbers

Wednesday, January 12, 2022

The Zen of Coding - Magic Numbers

I have been coding professionally now for over 30 years in a large variety of languages. There are a number of practices that I have found to be universal best practices regardless of the programming language or programming style in use. Many of these relate to how readable code is, which is the second most important aspect of code, right behind correctly functioning code.

Readable code is more easily maintained, and I have yet to run into the app that I didn't need to revisit and update. In general, code is read an order of magnitude more often than it is written. It is one of the reasons why coding style guidelines are important if there is more than one author working on a project, either concurrently or across time. With the growth of open source and more companies using agile teams, it is rarer and rarer to be the sole author on a project. 

The Zen of Coding is this set of meta rules that apply universally. Many of these items are noted in other places on the internet and in books. You can look at this as my list of favorite coding pet peeves to avoid. Let's start with the first one: magic numbers.

What is a magic number?

A magic number is a literal numeric value in code, outside of an assignment of that value to a named constant. For example, 22/7 in the following code is a magic number.

float circularArea = radius * radius * 22/7;

Looking at this code, most people are not going to know what 22/7 is. A better way to write the code would be

float circularArea = radius * radius * Math.PI;

Here we can use a predefined constant as opposed to defining one ourselves. Let's consider a program dealing with force and acceleration. When you find 9.8 sprinkled all over the code, you might be able to deduce from context that it is the value of gravitational acceleration. It would be much better to declare a constant and use its name everywhere else in the code. For example:

const float gravitationalAcceleration = 9.8;

Doing this not only increases the readability of the code, but also has the added benefit of increasing maintainability. When a more precise value for gravitational acceleration is needed, such as 9.80665, the code only needs to be changed in one place.

Some people have raised a concern that the extra assignment can affect performance. I have yet to encounter a case where it does. I have gone so far as to verify this by examining the compiled assembly language.

Zen of Coding Rule: Use constants whose name has semantic meaning instead of numeric literals. 

Happy Coding.

Tuesday, June 8, 2021

Working with JSON in Powershell - Collection of hashtables

Sometimes you interact with a REST API that returns a block of JSON containing a collection of items you want to enumerate over. If the collection of items is not in an array, but looks like the following, it is challenging to iterate over.

{ "firstItem" : [ "one", "two" ], "next" : [ "yes", "no", "maybe" ], "6739" : [ "red", "blue", "green" ] }

Of course, we could say that the author of the REST API should change the returned result to make it easier to interact with, but that is not always within our control.

In PowerShell we can make this collection enumerable with a few extra lines of code. 

If you are working with a newer version of PowerShell (version 6 or newer), you can use the -AsHashtable parameter of ConvertFrom-Json, as in the example below.
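For example, a minimal sketch assuming $json holds the response shown above (note that key order in a plain hashtable is not guaranteed):
$data = $json | ConvertFrom-Json -AsHashtable
foreach ($key in $data.Keys)
{
    Write-Host $key
    foreach ($item in $data[$key])
    {
        Write-Host "`t$item"
    }
}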

If that is not the case, you can still make the JSON response enumerable, as shown below.

$json = ... # response from the API
$data = $json | ConvertFrom-Json
$enumerableData = $data.psobject.properties | Select-Object Name, Value
foreach ($element in $enumerableData)
{
    Write-Host "$($element.Name)"
    foreach ($item in $element.Value)
    {
        Write-Host "`t$item"
    }
}

The output from that is:
firstItem
    one
    two
next
    yes
    no
    maybe
6739
    red
    blue
    green
It is a nice little snippet that gets the job done.
