TCG Staff

Alex, Linux Engineering
Bhakti, Operations
Dave, Windows (Comm Svcs)
James, Operations
Jeffrey, Cloud Operations
Jennifer, Client Success Manager
Kevin, Windows Engineering
Kimberly, Manager
Leroy, Windows Engineering
Mark, Operations
Michael, Linux Engineering
Noah Abrahamson, Director
Richard, Cloud Devops
Vedran, Cloud Devops

505 Broadway St
Cardinal Hall, MC 8823
Redwood City, CA

Integrating a Third Party Application with Stanford SSO

This article gives a generalized outline of how to integrate a third-party application with Stanford Identity using SAML. Since all applications are different, there is no single recipe covering every possibility. Integration decisions depend on the authentication architecture of the application, the specifics of the Stanford environment, and the application's user base.

This article is written for a wide audience: it begins with the very basics and avoids most of the specialized terminology, but gradually gets more technical. If at any point you feel you need assistance setting up an integration, feel free to stop reading and submit a HelpSU ticket.

Single Sign-On

Single Sign-On (SSO) is an authentication scheme that allows a user to log in to multiple independent applications using the same set of credentials. At Stanford, everyone is familiar with the Web Login page, where you enter your SUNet ID and password and respond to a Duo prompt. The same login page shows up no matter which application you are logging into. If you need the same kind of login capability for a third-party application or service, keep reading.

Continue reading “Integrating a Third Party Application with Stanford SSO”

Active Directory in the Cloud: Coming Soon!

Would you like to build a new Windows EC2 server in Amazon AWS and then join it to Stanford’s Active Directory domain?  You could do things like allow your users to log in using their SUNet IDs (i.e. WIN\[sunet]), or apply Group Policy Objects to your servers.  University IT is very close to being able to offer AD in the Cloud, a project being led by Stanford’s Windows Infrastructure team.

Our team, the Technology Consulting Group (TCG), helped with some of the testing.  TCG’s testing was done in a non-production AD environment, so the details below are subject to change.  However, here’s a quick overview for those who are interested.  Simply stated, there are two key requirements…

First off, IPv6: 

Traffic from your Windows server to the Domain Controllers is only allowed over IPv6, so you need to enable it within AWS.  In addition, the Windows Infrastructure group has firewall rules in place for security reasons, so they will need to explicitly grant your server access.  Here are the steps we used.

  • Enable IPv6 at the VPC level (you will probably be assigned a static /56 range of IPs).
  • Next, out of that new /56 IPv6 range, assign a smaller /64 range to the AWS subnet where your Windows server will live. 
  • Give this /64 range to the Windows team.  They will add “permit” rules to their Domain Controller firewalls. 
  • Finally, go to your EC2 Windows instance and allow AWS to auto-assign an IPv6 address.  (This IP will be in addition to the private, and possibly public, IPv4 address(es) already in use by your machine.)
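The subnet math in the steps above can be sketched with Python’s standard `ipaddress` module.  The /56 prefix below is a made-up example, not an actual Stanford or AWS assignment:

```python
import ipaddress

# Hypothetical /56 range, standing in for whatever AWS assigns to your VPC.
vpc_range = ipaddress.ip_network("2600:1f00:1234:ab00::/56")

# Carve the /56 into /64 subnets; pick one for the AWS subnet that will
# hold your Windows EC2 instance, and hand that /64 to the Windows team.
subnets = list(vpc_range.subnets(new_prefix=64))
print(len(subnets))   # → 256 (a /56 contains 256 possible /64s)
print(subnets[0])     # → 2600:1f00:1234:ab00::/64
```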

Next, IPSec Tunnel:

All traffic to/from the DCs must be encrypted.  To do so, your EC2 instance needs to create an IPSec tunnel using a UIT-provided certificate.  Fortunately, the Windows Infrastructure team has a couple of PowerShell scripts that make this task very easy, located here:  https://code.stanford.edu/winfra/aws-ad-client/tree/master/scripts

  • First run this one:   
 ./Get-VaultCertificate.ps1  -vault_role_id "xxxxxxxxxxxxxxxxxxxx" -vault_secret_id "yyyyyyyyyyyyyyyyyy"  

This script reaches out to Stanford’s “Vault” system and, using the special ID values provided to you, retrieves a new certificate and installs it into the Windows certificate store on your EC2 machine.

  • Then run:   
./New-MemberIPSecPolicy.ps1 -remote_ipscope "xyz:xyz:xyz:xyz::/56"

This one creates a new IPSec tunnel on your Windows EC2 instance.  It uses the certificate from the previous step and sets up an IPSec rule for traffic to the DCs.  (The specific /56 range for the DCs will also be provided to you by the Windows team.)

Then you are done!  At this point, you can simply join your Windows server to the Stanford AD domain like you normally would, and then log into your Windows server using your AD account!

System Engineer Job Requirement: Multilingual/PhD in Psychology

Being a System Engineer has never been easy to begin with.  You have to be well versed in a range of technologies, constantly keep up with the latest and greatest, and keep updating your skill set.  However, to be an All-Star System Engineer, you almost need a PhD in Psychology and the ability to speak multiple languages.

Although I am only half kidding about the PhD in Psychology and the multiple languages, it’s really not too far from the truth.  A successful System Engineer must be able to implement technologies to help solve problems, and at the same time must be able to deal with clients in order to fully understand their needs.  A good System Engineer is a good listener who doesn’t always put their own ideas before the client’s. A good System Engineer first listens to the client, gathers the information, pauses and digests it, and then keeps listening.  It’s not possible to solve a problem without first finding out what the problem is. Too many people are quick to jump to conclusions and start offering solutions. Often, something as simple as listening attentively makes the client feel heard and saves a lot of going back-and-forth.  If you don’t get anything else out of this article, just remember one word: “LISTEN.”

Another thing a good System Engineer does really well is empathize.  They don’t undermine what the client is telling them or what the client must be going through.  The client comes to us with an issue. To us, the issue may be very minor, or at least not a big deal (not the end-of-the-world kind of big).  But to the client, it could be very stressful and anxiety-provoking, and we need to be mindful of that. We should first acknowledge that the client is experiencing an issue.  Understand the frustration or anxiety they might be feeling as a result of the incident. Assure them that we are on it. Come up with a game plan for how you will tackle the issue and share it with the client.  Give the client sufficient and frequent updates so they know you are working on it. When you think you have resolved the issue, be sure to check with the client to make sure they are aware it is resolved.  Have them test it before closing the case.

A good System Engineer should keep in mind that techies and general end users don’t really speak the same language.  More often than not, general users are more likely to understand Martian than the technical mumbo jumbo we’re accustomed to speaking in the IT world.  When communicating with clients, we need to watch the kind of language we use in our message. Keep in mind that the client may not know what you mean by GB, TB, AD, FQDN, AZs, VPCs, etc.  We have to tailor our message to the specific client. Some clients may be more technically savvy than others, in which case you can use more technical language. However, for those who are less technical, keep to simpler, plain English.  Be ready to do a little bit of handholding and a whole lot of translation to get your point across.

In summary, being a good System Engineer requires good technical skills.  That’s almost a given. However, to be a great System Engineer, you have to be a good listener and, at the same time, empathetic to your clients in order to truly understand what they need.  Try to use plain English instead of technical jargon when communicating with clients. If you can speak their language and see things from their perspective, you can win them over and gain their trust.  Do that, and you’ve won half the battle and are well on your way to a successful relationship!

Efficient Data Replication with ZFS

This year, I have been working with one of our clients on a typical research-oriented server setup: a few compute servers mounting a single shared storage volume over NFS, which is a common and well-tested configuration. The main difference in this project was the size of the storage. At the time our team became engaged with this client, they were using ten assorted storage servers based on Linux and FreeNAS. To replicate the data between these servers, rsync was being used, and an elaborate scheme was in place to make sure that each dataset was housed on at least two different servers. All of the storage servers were outdated and out of warranty, so the client agreed to procure new hardware and build a new setup from scratch.

Following the example of the Research Computing team, Ubuntu was selected as the base operating system for both compute and storage servers, deployed on commodity Supermicro hardware resold by Colfax. Cost-effectiveness was deemed a decisive factor by the client. To achieve maximum storage density, the client opted for a single 60-drive primary storage server with the ZFS file system. ZFS brings all the advantages of a copy-on-write file system, with features such as instant copies, snapshots, flexible volume management, built-in NFS sharing, and error resilience and correction.

Below I am going to discuss the SEND/RECEIVE feature of ZFS, which makes it easy to replicate large volumes of data efficiently.
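As a preview of how SEND/RECEIVE replication fits together, here is a minimal sketch of building the command pipelines.  All pool, dataset, snapshot, and host names are hypothetical; on a real system you would hand each pipeline to a shell (or build the pipe explicitly with subprocess) instead of just printing it:

```python
def replicate_cmd(src, snap, dest_host, dest, prev=None):
    """Build a 'zfs send | ssh ... zfs receive' pipeline.

    A full stream is sent the first time; after that, passing the previous
    snapshot name sends only the blocks that changed in between.
    """
    if prev is None:
        send = f"zfs send {src}@{snap}"                  # initial full copy
    else:
        send = f"zfs send -i {src}@{prev} {src}@{snap}"  # incremental
    return f"{send} | ssh {dest_host} zfs receive -F {dest}"

# First run: replicate the whole snapshot.
print(replicate_cmd("tank/data", "2019-06-01", "backup-host", "backup/data"))
# Nightly runs: incremental, typically far less data on the wire.
print(replicate_cmd("tank/data", "2019-06-02", "backup-host", "backup/data",
                    prev="2019-06-01"))
```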

Continue reading “Efficient Data Replication with ZFS”

Static Fling

There are a lot of different Content Management Systems available for publishing websites these days. Gone are the days when you had to learn Hypertext Markup Language to create a simple web page, or craft a comprehensive set of Cascading Style Sheets to redesign your website look and feel. A Content Management System provides a powerful environment to help you to build and maintain your website, regardless of size.

Unfortunately, all that power comes at a cost. The CMS itself becomes another component you are required to power and maintain. The tool you selected to help you build your website can get in the way of you focusing on the task it was meant to help with. Worse yet, a CMS usually creates overhead when displaying your website to visitors, which can lead to scaling issues and possible downtime in the event of a traffic surge. Continue reading “Static Fling”

Migrate a MS SQL cluster with a shared RDM disk in a VMware environment

by David Fong

We needed to migrate a MS SQL cluster with a shared RDM disk in a VMware environment to new storage, for both the OS disks and the RDM.  The two cluster nodes are located on different ESXi hosts.  The database files and logs sit on the RDM disk, while the OS is on a VMFS datastore.  It was not a very straightforward migration: it involved un-mapping and re-mapping RDMs, copying the databases and all the related files, and finally migrating the OS drives.

Continue reading “Migrate a MS SQL cluster with a shared RDM disk in a VMware environment”

Backing up directly to the Cloud using Cloudberry

by Bhakti Chokshi

TCG now offers CloudBerry as a cloud backup alternative when we build servers for our clients. It is a low-cost, month-to-month managed cloud backup as a service. TCG can even provide CloudBerry as a standalone service for the systems we do not proactively manage with a support contract.

With this new offering, which supports several operating systems including Windows, Mac, and Linux, TCG installs the backup software on the server, sets up and manages the cloud storage account, tunes the backup strategy, closely monitors the progress of backups, troubleshoots any errors, and performs any restores.

TCG is excited about this new offering as it aligns with the UIT multi-year cloud initiative.

Some of the interesting features of CloudBerry:

Continue reading “Backing up directly to the Cloud using Cloudberry”

Shibboleth Authentication on IIS

by Leroy Altman

As you may have heard, Stanford is moving away from its in-house authentication software, known as “WebAuth,” to an industry-standard Open Source technology called SAML2.  Software called “Shibboleth” is available to leverage SAML2, and it includes a version built for Microsoft’s Internet Information Services (IIS) web server running on Windows.

This article was gathered from two great sources listed below, and I encourage you to read both for more details.  This article is really just the tip of the iceberg:

There are two new terms to know:

  • Identity Provider (IDP):  This is Stanford’s central authentication service
  • Service Provider (SP):  This is your web server

Installation:  This is a quick summary of how to get Shibboleth installed and working on a Windows IIS web site.

Some prerequisites:

  • Windows Server 2012 R2 w/ IIS installed.
  • In addition to the default IIS modules, you’ll also need to add Management Compatibility components:
    • IIS 6 WMI
    • IIS 6 Metabase compatibility
    • IIS 6 Scripting tools
    • IIS 6 Management Console
  • Install ISAPI filter and Extensions [located in Web Server (IIS) → Web Server → Application Development]
  • A “Default Web Site” which has a default page, used for testing.
  • A “/secure” subfolder under the root, also with a test page.
  • An SSL certificate installed and working on the website.

Run the Shibboleth Installer.  The most recent version, as of this writing, is here:   https://shibboleth.net/downloads/service-provider/2.6.1/win64/

The defaults for installation are typically fine to use:
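After installation, most Service Provider behavior is controlled by the shibboleth2.xml configuration file.  As a rough illustration only (the hostname and entityIDs below are placeholders, and you should confirm the exact Stanford IdP entityID with UIT), the pieces relevant to protecting the “/secure” folder look something like this:

```xml
<!-- Fragment of shibboleth2.xml; all names here are placeholders. -->
<!-- On IIS, access control is typically set in the RequestMap rather
     than in per-directory web server config. -->
<RequestMapper type="Native">
  <RequestMap applicationId="default">
    <Host name="www.example.stanford.edu">
      <Path name="secure" authType="shibboleth" requireSession="true"/>
    </Host>
  </RequestMap>
</RequestMapper>

<ApplicationDefaults entityID="https://www.example.stanford.edu/shibboleth"
                     REMOTE_USER="eppn">
  <Sessions lifetime="28800" timeout="3600" relayState="ss:mem"
            handlerSSL="true">
    <!-- Point SSO at the Stanford IdP; verify this entityID with UIT. -->
    <SSO entityID="https://idp.stanford.edu/">SAML2</SSO>
  </Sessions>
</ApplicationDefaults>
```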

Continue reading “Shibboleth Authentication on IIS”

Granting User Access Without Granting User Access in Windows

by Kevin Tai

I recently had a client who hired a consultant to work on a special project to update their website.  The client initially asked to give the consultant access to a file share on the server where the website is hosted so that he could update the files.  Then the consultant realized he needed additional access, like restarting the services for the website’s Prod and Dev environments. We could have lazily granted him Remote Desktop access to the server and called it a day, but that would have given him more access than he really needs.  All he really needs is to be able to restart two services (the production web server and the dev web server) after he makes updates to the environments.

That got me thinking that there must be an alternative way to accomplish this without giving up too much access.  So I designed a process that does just that, and here’s how it works…

Continue reading “Granting User Access Without Granting User Access in Windows”