Athenian's #1 goal is to provide end-to-end visibility on the software delivery pipeline so that engineering leaders can remove bottlenecks and improve their performance. Today we are proud to release CI Insights, which we compute from GitHub checks.
To enable those, we require the following 2 read-only permissions that you can grant us here:
From now on, the Velocity section includes a new section called CI, which contains 2 subsections, one of which is Success Ratio. Among these subsections, you can find insights such as:
- Success ratio per repository
- Run time per pull request
- Concurrency analysis
The views also include a table that lets you go more fine-grained and dig into the metrics for each individual GitHub check. For example, by sorting the table by Run Time, you can focus on the GitHub checks that take the longest to run and track your progress in optimizing the corresponding part of your CI.
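To make these metrics concrete, here is a minimal sketch of how a success ratio and mean run time per check could be aggregated from check-run records. This is an illustration only: the data shape and field names (`name`, `conclusion`, `duration_s`) are assumptions, not Athenian's actual schema or the GitHub API's exact payload.

```python
from collections import defaultdict

# Hypothetical check-run records, loosely modeled on GitHub check runs.
check_runs = [
    {"name": "unit-tests", "conclusion": "success", "duration_s": 310},
    {"name": "unit-tests", "conclusion": "failure", "duration_s": 295},
    {"name": "lint", "conclusion": "success", "duration_s": 45},
    {"name": "unit-tests", "conclusion": "success", "duration_s": 330},
]

def check_stats(runs):
    """Aggregate success ratio and mean run time per check name."""
    grouped = defaultdict(list)
    for run in runs:
        grouped[run["name"]].append(run)
    stats = {}
    for name, rs in grouped.items():
        successes = sum(1 for r in rs if r["conclusion"] == "success")
        stats[name] = {
            "success_ratio": successes / len(rs),
            "mean_run_time_s": sum(r["duration_s"] for r in rs) / len(rs),
        }
    return stats

stats = check_stats(check_runs)
# Sorting by mean run time surfaces the slowest checks first,
# mirroring the "sort the table by Run Time" workflow.
slowest = sorted(stats.items(), key=lambda kv: kv[1]["mean_run_time_s"], reverse=True)
```

Sorting the aggregated stats this way is what makes the slowest checks the natural starting point for CI optimization.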
We currently fully support the following CI providers: Travis. However, Run Time is not yet available for others, including Jenkins, which will require a few more weeks of development, as they use a different version of the GitHub API to send the checks to GitHub.
This week we are releasing a new table of Epics for accounts integrated with Jira. This view lets you appreciate the progress and effort on the new features your teams have been working on during a given time period. This table serves multiple purposes:
- Improve communication by making the development of the new work transparent.
- Detect features being stuck or paused by your engineering teams, and for how long.
- Give more visibility and predictability on the ETAs of your new features.
- During retrospectives, identify the reasons why a final delivery has been delayed.
- Highlight potential flaws in product development workflows.
The table also offers metrics for each epic, answering questions such as:
- What is the progress of each feature?
- What is the Lead Time of a specific feature?
- What is the size of this feature in terms of issues and pull requests?
The Epic table becomes really useful when you structure your product development around epics and group tasks under those entities. Only then can you have a fine-grained view of your features by expanding epics and analyzing their corresponding issues.
For users enabling the Jira integration, Athenian now maps GitHub pull requests to the corresponding tickets to offer an additional series of insights. The new metrics are organized according to 3 main Engineering pillars.
- Velocity: how long does it take to get comparable work done?
- Quality: are our end users getting quality work?
- Outcome: what work are we delivering?
In this section, you'll find the original quantitative analysis of your software delivery pipeline and metrics such as lead time for features, pull request cycle time, or release frequency. If your goal is to remove bottlenecks and accelerate throughput, this is where to be.
The insights from the Quality section tell you about the reliability of your system. How many bugs in production have we faced this month? What is our Mean Time To Restore (MTTR) depending on the priority level?
This section also gathers leading indicators of good quality code - according to the best Engineering practices - such as smaller pull requests sizes or a high code review coverage.
Finally, the Outcome section draws multiple pictures of the output work delivered by the team. Where have we spent our time? How much work has been dedicated to bug fixing vs. tech debt vs. adding new features? What features? Which topics have concentrated the team's attention during this cycle? Those questions can be answered with the data from the Outcome section.
Release small, and frequently.
Now with Athenian you can not only measure your release cycle time but also understand your release frequency. In particular, if you have a service-based architecture that depends on multiple releases, understanding your release frequency per repository will help you see the true velocity at which you're moving and where potential bottlenecks are.
Understanding what is inside a release, how large it is (in terms of # of PRs or LoC impact), and how many contributors were involved is now possible from the new release table.
Code Reviews are invitations to share knowledge amongst the team and give feedback. This latest chart allows you to understand where code is being reviewed and where it's not yet.
We do this by breaking down repositories into reviewed and non-reviewed Pull Requests.
We also provide you with a top level metric of "Pull Requests Reviewed (%)" which you can use to set internal goals, such as
"100% of Pull Requests reviewed on our web application, and an average of 80% of PRs reviewed across our whole organization".
As always you can jump to the Pull Request table and filter on "Not Reviewed" to see where you're skipping this best practice.
Or choose to filter on your offending repositories to understand why certain pull requests are being merged before they are reviewed.
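As an illustration, the "Pull Requests Reviewed (%)" metric boils down to a simple ratio per repository and overall. Here is a minimal sketch, assuming a hypothetical list of PR records with `repo` and `review_count` fields (these names are illustrative, not Athenian's schema):

```python
from collections import defaultdict

def reviewed_ratio(prs):
    """Share of PRs that received at least one review."""
    if not prs:
        return 0.0
    reviewed = sum(1 for pr in prs if pr["review_count"] > 0)
    return reviewed / len(prs)

prs = [
    {"repo": "web-app", "review_count": 2},
    {"repo": "web-app", "review_count": 0},
    {"repo": "api", "review_count": 1},
]

# Overall ratio across the organization.
overall = reviewed_ratio(prs)

# Per-repository breakdown, matching the reviewed / not-reviewed split.
by_repo = defaultdict(list)
for pr in prs:
    by_repo[pr["repo"]].append(pr)
per_repo = {repo: reviewed_ratio(rs) for repo, rs in by_repo.items()}
```

A goal such as "80% of PRs reviewed across the organization" then becomes a simple threshold check on `overall`, while `per_repo` points to the repositories dragging the number down.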
Hot on the heels of the announcement of 4 new charts earlier this week, we bring 2 new updates today.
We got valuable feedback from our users that while their Cycle Time across each stage is useful to quickly spot where the bottlenecks in their Software Delivery Pipeline are, it often prompts the question:
How do I know if my Average Cycle Time isn't dominated by outliers?
In the coming months you'll see more features being released that help you, as an engineering leader, identify outliers (we released the default option to remove stalled PRs back in June). That is why we're introducing Distribution Charts.
Now your Cycle Time in each stage (WIP, Review, Merge, and Release) and your Lead Time no longer just show their trend over time but also provide a distribution.
On the y-axis you find the # of Pull Requests and on the x-axis you find your PRs bucketed based on the time spent in each stage. This is done on a logarithmic scale so you can quickly spot your outliers.
Let me give you a great example where this is very apparent. The following company uses CI/CD across their repositories and almost always releases in ±1 minute, but its Average Cycle Time in the Release stage is 24 hours. With the distribution chart you can now see that while 97% of their PRs release in ±1 minute, a small number of PRs end up being significant outliers:
As an engineering leader, these outliers are what you want to focus on: understand what happened there, and either choose to ignore them or adapt your practices and tooling to avoid them.
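The logarithmic bucketing behind such a distribution chart can be sketched in a few lines. This is a simplified illustration: the bucket boundaries (powers of 10 seconds) and the sample durations are assumptions, not Athenian's actual implementation.

```python
import math
from collections import Counter

def log_bucket(seconds: float, base: float = 10.0) -> int:
    """Index of the logarithmic bucket a duration falls into:
    bucket 0 covers up to 1s, bucket 1 up to 10s, bucket 2 up to 100s, etc."""
    return max(0, math.ceil(math.log(max(seconds, 1e-9), base)))

# Hypothetical release-stage durations in seconds: most PRs release in
# about a minute, while a couple of outliers take half a day or a full day.
durations = [55, 61, 48, 70, 52, 59, 86400, 43200]

# Histogram of PR counts per log bucket: the x-axis of the distribution chart.
histogram = Counter(log_bucket(d) for d in durations)
```

On a linear scale the six ~1-minute PRs and the two day-long outliers would be unreadable on the same axis; the log buckets make both clusters visible at a glance, which is exactly why outliers pop out.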
Code Bypassing Pull Requests
Athenian's users are diligent about using Pull Requests. However, at times there are hidden issues in our Software Delivery Pipeline that stop engineers from using PRs and make them commit directly instead. A common example: all the tests pass locally, and the engineer is faced with waiting for a long-running CI check when opening a PR, so instead they choose to commit directly.
As engineering leaders, we want to discover why we are committing directly without a Pull Request. This new table in the Work In Progress section shows you the repositories where this is most common and allows you to inspect the offending commits.
You spoke, we listened! We're releasing 5 new charts related to PR size and activity.
Pull Request Size Insights
Small PRs, released often, are the cornerstone of high-performance engineering teams. Until now, we gave you insight into your avg. PR size and the ability to order your Pull Requests by size in the table. From our conversations with our users we learned that you'd like to be able to dig in even further and understand how you're doing in terms of PR size.
We've therefore introduced 3 new charts into the Work In Progress section.
- Distribution of # PRs based on lines of code; a distribution concentrated on small PRs, with few PRs in your long tail, is what you should be aiming for.
- A breakdown of your PRs into 5 size buckets; it's the 500+ lines bucket you should aim to avoid whenever possible.
- The ability to dig into the largest PRs during the date range you've selected and their average lead time.
Pull Request Activity
You're now able to look at the quantity of Pull Requests broken down by date, repository and author. Quantity metrics are useful to help you understand where your biggest areas for improvements are. Be careful though to never use quantity metrics as a way to rank individuals, they do not work for that.
Excluding stalled pull requests, 5x performance improvement, updated stage badges and a new volume chart
Excluding stalled pull requests
Stalled pull requests are PRs that have had no activity for a long period of time.
For many organizations using Athenian, the product experience was skewed by their backlog of stalled pull requests being included in the charts, the pull requests section, and the stage metrics.
These PRs take up your attention when most of the time, they don’t really influence the current work being done.
Being able to review stalled PRs is important because they usually do require an action, either to be closed, rebased, or picked up to work on again.
To solve this issue, and offer a lightweight user experience, we’ve excluded stalled pull requests by default and included a checkbox in the calendar that allows you to include them.
How does it work?
By default, any pull requests that had any activity (created, reviewed, merged, released etc.) in the date range you selected will be included. If you choose to select “Include stalled pull requests” it will also include all pull requests that are open but didn’t have any activity during this period.
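The selection rule described above can be sketched in a few lines. This is an illustrative sketch only; the field names (`state`, `activity`) are hypothetical, not Athenian's data model.

```python
from datetime import datetime

def include_pr(pr, start, end, include_stalled=False):
    """A PR is shown if it had any activity inside the selected date range;
    with include_stalled=True, open PRs with no activity in the range
    are shown as well."""
    active_in_range = any(start <= ts <= end for ts in pr["activity"])
    if active_in_range:
        return True
    return include_stalled and pr["state"] == "open"

start, end = datetime(2021, 6, 1), datetime(2021, 6, 30)

# One PR last touched months ago, one active inside the selected range.
stalled = {"state": "open", "activity": [datetime(2021, 1, 5)]}
fresh = {"state": "open", "activity": [datetime(2021, 6, 10)]}
```

With the checkbox off, `stalled` is hidden and `fresh` is shown; ticking "Include stalled pull requests" flips only the first outcome.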
Performance improved by 5x in the last 2 weeks
During the last two weeks our team has worked hard to improve the loading time of our product. During this period we optimized indexes on the database, added new layers of caching, and continued the work on our transition from live computation to precomputed data. Important to note here is that even with caching and precomputed results, your data is never older than 5 minutes.
Updated stage badges
Before this release, the stage badges would show the number of pull requests that completed the stage. After observing how people are using Athenian, we’ve decided to change them to show the number of pull requests currently in that stage.
When you now see the number 4 here in this review stage, it means there are currently 4 pull requests that have started the review process but haven’t yet completed it.
When should you use this?
If you want to have a quick glance on how much work is pending in each stage.
New Volume Chart
We’re starting to add some ‘simple’ volume charts that will help you put in context other metrics (see Speed, Volume, Quality, and Impact). The first one we added is the # of pull requests created.
Today we released an important update that changes the behavior of the contributor filter. Before, if you had a contributor selected, the metrics would be calculated over all of the pull requests they were involved in, no matter whether they were a reviewer, merger, or author. Now when you filter on contributors, it will only include pull requests (and calculate their metrics) of which they are an author.
The motivation behind this change is that we wanted to allow teams to be fully in control of improving their metrics; by only including pull requests which the team authors, this is now possible.
Pro-tip: this change is particularly useful when you want to exclude pull requests authored by bots.
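The author-only behavior, including the bot-exclusion use case, can be sketched as follows (an illustrative sketch with hypothetical field names, not Athenian's implementation):

```python
def filter_prs(prs, authors):
    """New filter behavior: keep only PRs authored by one of the selected
    contributors; reviewer or merger involvement no longer matters."""
    return [pr for pr in prs if pr["author"] in authors]

prs = [
    {"id": 1, "author": "alice", "reviewers": ["bob"]},
    {"id": 2, "author": "dependabot[bot]", "reviewers": ["alice"]},
]

# Selecting only human contributors drops the bot-authored PR,
# even though a human reviewed it.
human_prs = filter_prs(prs, {"alice", "bob"})
```

Under the old behavior, PR 2 would have been included because alice reviewed it; under the new author-only rule it is excluded, which is what makes bot exclusion straightforward.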
It's finally here! You're now able to define your teams in Settings and filter on them using the contributor filter.
Important to know:
- Any user is allowed to create, edit and delete teams
- One contributor can be part of multiple teams
- Anyone who is not a member of a team will be shown as "Other" in the contributor filter
- Create a team for your bots, allowing you to easily exclude them
- If you have an open-core repository, add all your company's team members to teams and use 'Other' to understand your community metrics
Contributor filter with teams
Frontend team selected