My Cabin in the Woods

by Jonathan Lent

Systems administrators today (Linux systems administrators, in the context of this post) have many valuable tools at their fingertips. After an initial investment in learning curves, architectural decisions, dead ends, inevitable frustration, and testing, adopting automation technologies can make day-to-day tasks much more productive, repeatable, iterative, and secure.

Continue reading “My Cabin in the Woods”

Shibboleth on Apache 2.4 Using Mixed Authentication Methods

by Jonathan Lent

Like many developers, application maintainers, and system administrators at Stanford, I’ve been focusing a lot of time lately on migrating legacy web applications from WebAuth to Shibboleth. Also like many, I’ve found Alex Tayts’ article Migrating away from WebAuth: practical steps very useful during this process. However, as straightforward as that writeup is, it doesn’t account for one thing: the Shibboleth SP software is not perfect.

During a recent deployment, I found that simply enabling the shib2 Apache module on systems running Apache 2.4 caused applications using multiple AuthTypes (e.g. WebAuth and basic authentication) to suddenly return a 401 (Unauthorized) error. This happened before adding any directives to use Shibboleth as the AuthType.
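As a rough sketch of the kind of configuration involved (the paths and Location blocks below are hypothetical; ShibCompatValidUser is a mitigation documented by the Shibboleth project for authorization conflicts between mod_shib and other Apache 2.4 auth modules):

```apache
# Global: restore 2.2-style semantics so mod_shib only claims
# "Require valid-user" / "Require user" for actual Shibboleth sessions,
# instead of intercepting requests authenticated by other modules.
ShibCompatValidUser On

# A Shibboleth-protected application (hypothetical path)
<Location /secure>
  AuthType shibboleth
  ShibRequestSetting requireSession 1
  Require shib-session
</Location>

# A legacy application still using basic authentication (hypothetical path)
<Location /legacy>
  AuthType Basic
  AuthName "Legacy app"
  AuthUserFile /etc/apache2/htpasswd
  Require valid-user
</Location>
```

With a setup along these lines, the basic-auth location can keep working while Shibboleth handles its own paths.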

Continue reading “Shibboleth on Apache 2.4 Using Mixed Authentication Methods”

Using Shibboleth with nginx

Unlike Apache, nginx does not have a module like mod_shib that interacts directly with the Shibboleth daemon, shibd. Instead, I will use the ngx_http_shibboleth_module, which uses the FastCGI protocol to talk to the Shibboleth daemon over sockets. Shibboleth comes with two FastCGI modules:

  • FastCGI responder (shibresponder), which handles the HandlerURL
  • FastCGI authorizer (shibauthorizer), which acts as a filter and performs the usual steps: authentication (authN), assertion export, and authorization (authZ)

Of course, these modules have to be running alongside the shibd daemon. Let’s start with the Shibboleth setup and configuration. I used Ubuntu 16.04 LTS (Xenial) for the setup described in this article, but it can easily be ported to other Linux distributions.

There are some guides on how to get such a configuration working, but all of them seem to be missing one or more crucial steps, a gap I have tried to remedy with this article.
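To sketch where the configuration ends up (the socket paths and backend here are hypothetical; shib_request and shib_request_use_headers are directives provided by ngx_http_shibboleth_module):

```nginx
# Responder: forwards /Shibboleth.sso/* handler requests to shibresponder
location /Shibboleth.sso {
    include fastcgi_params;
    fastcgi_pass unix:/run/shibboleth/shibresponder.sock;
}

# Protected application: shibauthorizer acts as an access filter
location /secure {
    shib_request /shibauthorizer;
    shib_request_use_headers on;
    proxy_pass http://localhost:8080;   # hypothetical backend
}

# Internal subrequest endpoint used by the shib_request directive
location = /shibauthorizer {
    internal;
    include fastcgi_params;
    fastcgi_pass unix:/run/shibboleth/shibauthorizer.sock;
}
```

The rest of the article fills in the Shibboleth SP side that makes these sockets exist.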

Continue reading “Using Shibboleth with nginx”

A simple PowerShell script to restore from a snapshot for a Windows EC2 instance (single volume)

For those of us who are accustomed to the “revert to snapshot” function in a VMware environment, it is quite a challenge to do the same on AWS.  The problem comes from the fact that you can’t manipulate the volume itself most of the time, and restoring basically means replacing the currently mounted volume of the instance.  Here’s a script I wrote for a group discussion we had here at TCG (Technology Consulting Group) to simplify the restore process.
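The general flow looks roughly like the following sketch using the AWS Tools for PowerShell (the instance and snapshot IDs are placeholders, and a real script would wait for each state transition before moving on):

```powershell
# Sketch: replace an instance's mounted volume with one restored from a snapshot.
$instanceId = 'i-0123456789abcdef0'    # placeholder
$snapshotId = 'snap-0123456789abcdef0' # placeholder

# 1. Stop the instance so the volume can be swapped safely
Stop-EC2Instance -InstanceId $instanceId

# 2. Find and detach the currently attached volume
$instance = (Get-EC2Instance -InstanceId $instanceId).Instances[0]
$mapping  = $instance.BlockDeviceMappings[0]
Dismount-EC2Volume -VolumeId $mapping.Ebs.VolumeId

# 3. Create a new volume from the snapshot, in the same availability zone
$az  = $instance.Placement.AvailabilityZone
$vol = New-EC2Volume -SnapshotId $snapshotId -AvailabilityZone $az

# 4. Attach it at the original device name, then start the instance
Mount-EC2Volume -InstanceId $instanceId -VolumeId $vol.VolumeId -Device $mapping.DeviceName
Start-EC2Instance -InstanceId $instanceId
```

This is only an outline of the steps; the full script linked below handles the waiting and error cases.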
Continue reading “A simple PowerShell script to restore from a snapshot for a Windows EC2 instance (single volume)”

Carbon Black Custom Rules

Bit9, now called Carbon Black (Cb) Protection, is a utility that intentionally blocks any application that has not been authorized to execute on the system. Carbon Black Protection is a whitelisting tool that allows the creation of rules to control file executions on monitored systems. Stanford University IT is actively working on implementing Carbon Black Protection in our environment as an additional security tool alongside firewalls and anti-virus applications.

Carbon Black works by continuously monitoring all file-system activity on the server, providing a real-time response that blocks potential threats. We can whitelist applications by creating event rules and custom rules; in this article we will elaborate on best practices for creating custom rules, and on why and when we need them.
Continue reading “Carbon Black Custom Rules”

Databases in the Cloud?

by Leroy Altman

You’ve probably already heard about “Cloud” technology, where a server can be hosted for you at some remote location.  Did you know that the Cloud has more than just servers?  You can also get a very easy-to-use database.  Amazon — yes, the same ones that sell you books and BBQ grills — has a service called Amazon Web Services (AWS).  And within this service is something they call the Relational Database Service (RDS).  This means that you can have a Microsoft SQL Server database without having to bother with all the hassle of running a database server.

The cool part is that everything is done behind the scenes for you.  You simply provision a database and Amazon RDS takes care of the rest of the technical details.  You get a database username, a password, and a URL to connect to.  With this information, your application “sees” and uses it just as if it were a regular Microsoft SQL Server.
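For illustration, this hedged Python sketch shows the shape of the connection string an application would assemble from the pieces RDS hands back (the endpoint, user, and password below are made-up placeholders, not real credentials):

```python
# Assemble a SQL Server ODBC connection string from the three things
# RDS gives you after provisioning: an endpoint URL, a username, and
# a password. The application connects as if to any SQL Server.
def rds_sqlserver_conn_string(endpoint, port, database, user, password):
    """Build an ODBC connection string for an RDS-hosted SQL Server."""
    return (
        "Driver={ODBC Driver 17 for SQL Server};"
        f"Server={endpoint},{port};"
        f"Database={database};"
        f"Uid={user};Pwd={password};"
    )

conn = rds_sqlserver_conn_string(
    "mydb.abc123.us-west-2.rds.amazonaws.com",  # hypothetical endpoint
    1433, "appdb", "admin", "s3cret",
)
print(conn)
```

From the application’s point of view, that string is all it ever sees of the “cloud” part.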
Continue reading “Databases in the Cloud?”

What’s New in Windows Server 2016

Windows Server 2016 is the latest server operating system from Microsoft. It’s packed with modern features designed for virtualization and the cloud, along with significant improvements in networking, security, storage, and management. These new features (and feature improvements) will help take your organization’s infrastructure to the next level, whether it is on premises, hybrid, or 100% in the cloud.

Some of the top new features most likely to affect organizations include the following:
Continue reading “What’s New in Windows Server 2016”

Hyperconverged Infrastructure (HCI)

The traditional data center, or server rack, consists of a combination of compute servers, storage appliances, and networking equipment. Each piece of the stack is its own entity, or “silo”, and usually requires a separate team to maintain it. So what happens when one of these entities fails? It usually involves a data center “fire” and tons of downtime. So how can we consolidate these silos and create a resilient, highly available data center? Hyperconverged infrastructure may be the solution we are looking for (depending on your requirements, that is).

Hyperconverged infrastructures combine compute nodes, storage, and networking into a bundled solution that creates highly available and scalable clusters. This allows for elasticity and easy, rapid growth. The servers usually contain a hybrid setup of flash (SSD) caching disks and HDDs, although all-flash/SSD configurations have recently become available. Management traffic and data traffic are intelligently routed across the switches to avoid performance hits and bottlenecks within the cluster.
Continue reading “Hyperconverged Infrastructure (HCI)”

Built To Scale

by Allan Holtzmann

Cloud Architecture on a Budget

We have all heard repeatedly about the “cloud”, and how it can do amazing things for our IT efforts, reliability, and budget. Unfortunately, the details on how to accomplish improvements in all three areas are often sparse. It is easy enough to migrate your website to a cloud platform, but making your website resilient, reliable, and cost-effective at the same time is another matter altogether.
Continue reading “Built To Scale”

Avoiding a Faux-PAW: Ditching my Beloved Jump Host

by Jonathan Lent

Background

The Information Security Office (ISO) has created a set of Minimum Security Standards (MinSec), broken down into a matrix of applicability depending on the risk classification of a given system (low, medium, or high) [1]. Of these standards for high-risk systems, one requires the use of a dedicated admin workstation for administration (also known as a Privileged Access Workstation (PAW) or Personal Bastion Host (PBH)) [2]. Unlike many of the MinSec requirements, this standard doesn’t hinge so much on a technical implementation detail; rather, it requires a simple set of technical changes (firewall rules), along with a drastic change in daily workflows for folks like me.

Before I go into why shared jump hosts can be a source of risk, it’s important to keep an open mind and reiterate why these setups can be useful. To be clear, by “jump host” I mean a hardened server that is used to gain access to other resources. Without going into implementation-specific details, jump hosts are often on the short list of machines (or are the sole machine) that can connect to various ports on endpoints. Ideally, the machine would be locked down to restrict authentication access: not only a carefully scrutinized list of users, but additional tweaks requiring specific protocols for access (e.g. Kerberos rather than password authentication). This is indeed more secure than some of the alternatives (e.g. world- or domain-wide access to individual endpoints). With proper planning these hosts can be hardened to eliminate the potential for many of the issues outlined below. However, the weakest link in computer security often lies with the human element, and that’s one aspect of the shared model that cannot be ignored. Continue reading “Avoiding a Faux-PAW: Ditching my Beloved Jump Host”
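As a rough sketch of the lockdown described above, an OpenSSH server policy on a jump host might look like this (the group name is hypothetical; the options are standard sshd_config directives):

```
# /etc/ssh/sshd_config on the jump host: restrict who can log in and how.
AllowGroups jump-admins            # hypothetical, carefully scrutinized group
PasswordAuthentication no          # no password logins
KbdInteractiveAuthentication no    # no keyboard-interactive fallback
GSSAPIAuthentication yes           # allow Kerberos (GSSAPI) authentication
PubkeyAuthentication yes           # or keys, if that fits your policy
PermitRootLogin no
```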