Thursday, April 13, 2023

Designing a Secure Landing Zone in AWS

When adopting cloud computing, securing your cloud infrastructure should be a top priority. A landing zone provides the foundational infrastructure for all of your workloads and applications, so security needs to be designed into it from the start.

In this post, we'll discuss how to design a secure landing zone in AWS and the best practices to follow.

1. Create a multi-account structure: Creating a multi-account structure is a best practice for securing your landing zone in AWS. This allows you to separate workloads, limit blast radius, and apply specific security controls to each account, or set of accounts. You can use AWS Organizations to create and manage multiple accounts in your AWS environment.
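
As a rough sketch of what this can look like with the AWS CLI, run from the management account (the root ID, OU name, account name and email address below are placeholders):

# Enable AWS Organizations with all features (required for service control policies)
aws organizations create-organization --feature-set ALL

# Create an organizational unit to group related workload accounts
aws organizations create-organizational-unit --parent-id r-examplerootid --name Workloads

# Vend a new member account into the organization
aws organizations create-account --email workload-prod@example.com --account-name workload-prod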

2. Define your security requirements: Before designing your landing zone, you should first define your security requirements. This will help you determine what security controls you need to put in place. Identify the type of data you will be storing (i.e. data classification), who will have access to it (i.e. data access), and the regulations you must comply with (i.e. data compliance). These security controls are applied in what is referred to as the Security Baseline or Layer.

3. Use AWS Identity and Access Management (IAM): IAM is a service that enables you to manage user access and permissions to AWS resources. You should use IAM to enforce the principle of least privilege and ensure that users only have access to the resources they need. You should also enable multi-factor authentication (MFA) for added security. These identity controls are applied in what is referred to as the Identity Baseline or Layer.
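
As one hedged example of enforcing MFA, you could create a policy that denies most actions when no MFA is present and attach it to your user groups; the policy name is illustrative and the statement is a simplified variant of the pattern AWS documents for this purpose:

# Deny everything except basic IAM/STS self-service actions when MFA is absent
aws iam create-policy --policy-name RequireMFA --policy-document '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyAllExceptListedIfNoMFA",
    "Effect": "Deny",
    "NotAction": ["iam:*", "sts:GetSessionToken"],
    "Resource": "*",
    "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}}
  }]
}'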

4. Implement encryption: Encryption is the process of encoding data so that only authorized parties can access it. You should encrypt all sensitive data at rest and in transit. AWS offers a variety of encryption options, including Amazon S3 encryption, AWS Key Management Service (KMS), and AWS Certificate Manager. These encryption services are applied in what is referred to as the Data Protection Baseline or Layer.
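
A hedged sketch of the key management side of this, where the alias and bucket names are placeholders:

# Create a customer managed KMS key and a friendly alias for it
aws kms create-key --description "Landing zone log encryption key"
aws kms create-alias --alias-name alias/landing-zone-logs --target-key-id <key-id-from-create-key>

# Enforce default encryption on a bucket using that key
aws s3api put-bucket-encryption --bucket my-log-archive-bucket --server-side-encryption-configuration '{
  "Rules": [{
    "ApplyServerSideEncryptionByDefault": {
      "SSEAlgorithm": "aws:kms",
      "KMSMasterKeyID": "alias/landing-zone-logs"
    }
  }]
}'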

5. Use AWS Config and AWS CloudTrail: AWS Config and AWS CloudTrail are services that provide visibility and auditing capabilities for your AWS environment. AWS Config helps you monitor resource configuration changes, while AWS CloudTrail provides a detailed record of all API activity in your AWS account. These services are applied in the Logging Baseline or Layer, and are dependent upon the Data Protection Baseline to provide the encryption keys necessary to encrypt log data.
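
As a minimal sketch of the CloudTrail side, reusing the placeholder bucket and key names from above and assuming the bucket policy already grants CloudTrail write access (a similar recorder setup applies for AWS Config):

# Create an encrypted, multi-region trail that writes to the central log bucket
aws cloudtrail create-trail --name org-audit-trail --s3-bucket-name my-log-archive-bucket --is-multi-region-trail --kms-key-id alias/landing-zone-logs

# Start delivering events to the trail
aws cloudtrail start-logging --name org-audit-trail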

6. Implement network segmentation: Network segmentation is the process of dividing your network into smaller, more secure segments. This helps to prevent lateral movement and limit the impact of a security breach. You can use Virtual Private Cloud (VPC) to create network segments in AWS. This capability is applied in the Network Baseline or Layer, and is dependent upon the previously deployed Logging Baseline and Data Protection Baseline to provide logging and encryption keys respectively.
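
A rough illustration of carving out a segment and capturing its traffic logs; the CIDR ranges, IDs and bucket ARN are placeholders:

# Create a VPC and a subnet within it
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0123456789abcdef0 --cidr-block 10.0.1.0/24

# Send VPC Flow Logs to the central log bucket from the Logging Baseline
aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-0123456789abcdef0 --traffic-type ALL --log-destination-type s3 --log-destination arn:aws:s3:::my-log-archive-bucket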

7. Implement automated security checks: You should implement automated security checks to ensure that your landing zone remains secure over time. AWS provides a range of automated security tools, including AWS Security Hub, AWS Config Rules, and Amazon Inspector. This capability is implemented in the Compliance Baseline or Layer.
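
A minimal sketch of turning these checks on with the CLI; the managed Config rule shown is just one example, and the standards and rule identifiers you enable will depend on your requirements:

# Enable Security Hub with its default security standards
aws securityhub enable-security-hub --enable-default-standards

# Add a managed AWS Config rule that flags unencrypted S3 buckets
aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "s3-bucket-server-side-encryption-enabled",
  "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"}
}'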

In conclusion, designing a secure landing zone in AWS requires careful planning and attention to detail. By following the best practices outlined above, you can create a landing zone that is secure, scalable, and easy to manage. Remember to regularly review your security controls and update them as necessary to keep up with changing security threats.

Monday, March 13, 2023

Understanding Cloud Foundations

“Cloud foundations” generally refers to the fundamental building blocks or components of cloud computing infrastructure. There are a few key aspects typically included in cloud foundations, such as:

  1. Virtualization: Cloud computing relies heavily on virtualization technology to provide a layer of abstraction between physical resources (like servers, storage devices, and networks) and the software applications that use those resources. Virtualization allows for more efficient use of resources, greater flexibility, and easier management of cloud environments.
  2. Infrastructure-as-a-Service (IaaS): IaaS is a cloud service model that provides virtualized computing resources over the internet, including servers, storage, and networking. Cloud providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) offer IaaS services that customers can use to build and deploy their own applications and services (see the short example after this list).
  3. Platform-as-a-Service (PaaS): PaaS is another cloud service model that provides a platform for developing, testing, and deploying applications without having to worry about the underlying infrastructure. PaaS providers like Heroku, Google App Engine, and AWS Elastic Beanstalk offer preconfigured platforms that include operating systems, programming languages, databases, and other tools.
  4. Software-as-a-Service (SaaS): SaaS is a cloud service model that allows users to access software applications over the internet, typically through a web browser or mobile app. Examples of SaaS applications include Google Workspace, Salesforce, and Microsoft Office 365.
  5. Cloud security: As with any computing environment, security is a critical consideration in the cloud. Cloud providers offer various security features and services to help customers protect their data and applications, including encryption, access controls, firewalls, and threat detection.
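
To make the IaaS model above concrete, here is a minimal sketch of provisioning a single virtual server with the AWS CLI; the image ID and instance type are placeholders:

# Launch one small virtual server (the AMI ID below is a placeholder)
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.micro --count 1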

Overall, cloud foundations are the underlying technologies, services, and practices that enable cloud computing to work effectively and efficiently.

If you want to learn more about cloud foundations and what a practical implementation entails, stay tuned here as I will be detailing the components that would go into a real-world cloud deployment.

Wednesday, March 08, 2023

What is a landing zone and why you should be using one

Cloud computing has become so commonplace that the question is no longer “should?”, but “why not?”. Now, as you, like so many others, begin your journey to the cloud, trying to decipher the many technical terms and jargon while ensuring you follow best practices for security, cost efficiency and operations, you will likely come across the term “landing zone”. So what is a landing zone, and why would I need one? A landing zone is a cloud computing best practice for establishing a secure, well-architected foundation that can help you scale and manage your cloud environment effectively. You can certainly get started without one, get your services up and running, and be productive. But unless you are a one-person shop, you will more than likely run into challenges at some point and start asking questions like:

  • How do I better isolate my environments for improved security and protection from mistakes?
  • How do I provide people with the appropriate access, making sure they have the right level of access to the right things?

There are several reasons why a landing zone is important when starting out in cloud computing:

  1. Security: A landing zone provides a secure foundation for your cloud environment by establishing security controls and best practices from the outset. This helps to reduce the risk of security breaches, data leaks, and other security incidents.

  2. Compliance: A landing zone can help you meet compliance requirements for your industry or region, by establishing policies and controls that are specific to your compliance needs.

  3. Scalability: A landing zone provides a scalable foundation that can grow with your cloud environment. By establishing a set of repeatable patterns and configurations (or blueprints), you can reduce the time and effort required to deploy new workloads and applications in your cloud environment.

  4. Cost optimization: A landing zone can help you optimize costs by establishing cost controls and best practices from the outset. By implementing cost optimization strategies early on, you can avoid common cost pitfalls and ensure that your cloud environment is cost-effective over the long term.

  5. Management and governance: A landing zone can help you establish management and governance policies that are specific to your business needs. By creating a set of standardized practices for deploying and managing resources in your cloud environment, you can ensure that your environment is consistent, well-organized, and easy to manage.

Overall, a landing zone provides a foundation for a secure, compliant, scalable, and cost-effective cloud environment. It can help you get started with cloud computing on the right foot and avoid common pitfalls and challenges that can arise in cloud environments.

If you found this useful, please follow along as I will provide future posts on the details of a landing zone, covering AWS Control Tower, custom and third-party options, along with some lessons learned.

Monday, January 05, 2015

Shifting IT to scale

While some IT groups are fine-tuning their practices, many others are still on the journey to achieve greater efficiency at scale. Not all can make this shift (typically due to size and budget) and that may be fine for them, but there are the unfortunate few who have not yet realized the need to shift their IT practices. Integration, deployment, and operations have been going through a huge transition, in large part due to business needs to support more customers globally and provide better availability and response times, all at low cost (how else can you maximize profits?). A simple phrase for this is “cloud enablement”. Yeah, that catch-all phrase that hopefully you’ve come to realize you need to be a part of, or risk falling behind. To achieve this, a fully automated deployment pipeline is a necessary component, which requires a few things be put in place, namely:
  • Thorough application monitoring
  • A Collaborative culture
  • Developer virtual machines
  • One-click integrations
  • Continuous integration
To support many daily deployments, the development process should revolve around making many small, continuous changes, while keeping risk to a minimum. To be comfortable with deploying at all times, you'll need to adopt a range of tools and practices.


Making Developers Comfortable
One of the best ways to ensure a developer can be comfortable with making any deployment is to ensure each developer has their own full production stack. Every developer should have their own virtual machine (there are free options like VirtualBox, my preferred choice, as well as KVM and Xen), configured with a configuration management tool such as Puppet, Chef, Ansible, Salt or whatever you use internally, with the same configuration used in production. Ensuring the whole provisioning process is automated is another necessity, since this not only optimizes productivity by minimizing creation times, but also eliminates the human error of typing multiple commands which may vary across environments and platforms.
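
As a rough sketch of what that automation can look like, assuming Vagrant with VirtualBox is installed and that the box name and manifest path are placeholders:

# Create and boot a local developer VM from a shared base box
vagrant init mycompany/prod-baseline
vagrant up

# Apply the same configuration management used in production, e.g. a Puppet run
vagrant ssh -c "sudo puppet apply /vagrant/manifests/site.pp"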

On the continuous integration front, having a tool which allows developers to test their changes without having to commit to the production code repository helps to keep production clean and thus deployable, while allowing quick and reliable testing. With the rising popularity of container technologies (looking at you, Docker), one can spin up on-demand, isolated and parallelized containers to conduct separate tests. The deployment process then becomes a simple one-click promotion between environments. A/B or other such zero-downtime production testing further adds to the comfort level, not just for developers but for the business in general.
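
A hedged sketch of that kind of isolated, parallel test run with Docker; the image name and test scripts are placeholders:

# Build an image containing the application and its test suites
docker build -t myapp-test .

# Run each suite in its own disposable container, in parallel
docker run --rm myapp-test ./run_unit_tests.sh &
docker run --rm myapp-test ./run_integration_tests.sh &
wait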

Just as important to doing continuous delivery is monitoring. KPIs (key performance indicators) should be well known and graphed (I like graphing everything that can be graphed; being a visual person, I find it tells the story much quicker and is more effective). Most monitoring solutions now provide anomaly pattern detection, which can be quite useful vs. eyeballing some numbers or even a graph. You're typically better off with a hybrid log approach where each application/service sends logs to two locations: its own log aggregator service, which provides short-term storage and insight into its local activities without any external dependencies; and a centralized log aggregator service, which provides longer-term storage and end-to-end insight across all services and clients.


Achieving a Collaborative Culture
Any highly ambitious, or for that matter successful, endeavor requires a high degree of collaboration, ongoing collaboration to be exact. Most high performers by their very nature are social and want to talk, to share, and to collaborate. One only needs to look at the success of Twitter, Facebook and other social media platforms to see this truth. Enabling that collaboration with the appropriate solutions and practices is the only seed required to make this desire grow and succeed. A highly collaborative communication style, which I like, is based on IRC, with chat rooms or channels for various specific purposes. For example, each team can have its own room/channel for private communication, another room/channel for a specific service/application, and yet another for general discussion or perhaps a “war room” (such as #warroom for outage-related conversations to coordinate an investigation, discuss countermeasures and monitor resolution). Many such solutions are available in the market, offering a full breadth of features such as email and ticket integration, video conferencing, whiteboarding and so on.

Part of a collaborative culture also involves doing a post-mortem, lessons learned or root cause analysis following an incident. I really like the idea of making this blameless as, I think, this gets things done more effectively. Typically everyone already knows who (or which team) is at fault, and assigning blame in a public manner only serves to decrease morale and job satisfaction, and gets in the way of actually learning what happened and how. Finger pointing is never productive in my experience.


A final word on on-call
I don’t like being on call, as in I don’t like being called at 3am out of my sleep or having to sit by the phone, and I’m sure no one actually does, but it is a necessary process for operations, support and developers. Being on-call not only makes you want to have things working so you don’t get called, but also ensures you stay in touch with the day-to-day issues that are faced. This is especially important when introducing new features or improving existing processes. I like going with a rotation schedule of one week in every four. I think this is quite typical and agreeable to most.

Sunday, September 28, 2014

Installing Tomcat 8.0.x on OS X


Prerequisite: Java

On OS X 10.9.x (Mavericks), Java is no longer installed by default, at least not initially. The easiest way to get Java on your Mac is to open the Terminal app and type ‘java’. You will be asked if you want to install Java and OS X will take care of the rest; you just need to follow the instructions and you’ll end up with Java 7. This involves sending you to Oracle’s Java SE web page, where you will need to select the appropriate JDK (JDK 7u67 as of this writing) for download and installation.

The JDK installation package comes in a dmg and installs easily on a Mac. In the same or a different Terminal window, entering:

java -version

will now show something like this:

java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)
Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode)


Confirming you have successfully installed Java 7.


Installing Tomcat

Now comes the Apache Tomcat installation, which is actually quite easy.

1. Download a binary distribution of the core module (apache-tomcat-8.0.12.tar.gz).

2. Using any available unarchive tool, unarchive the file from the Downloads folder to ‘/usr/local’. You will likely need administrative privileges for this step. You can unarchive the file contents to another location of your choosing; I chose ‘/usr/local’ since it’s my standard and keeps Tomcat centralized.
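
If you prefer to do this from the Terminal, something like the following is equivalent (adjust the path if your download landed somewhere else):

# Extract the Tomcat archive into /usr/local (needs admin privileges)
sudo tar -xzf ~/Downloads/apache-tomcat-8.0.12.tar.gz -C /usr/local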

3. To make future release updates transparent, create a symbolic link at ‘/Library/Tomcat’ pointing to your install location (‘/usr/local/apache-tomcat-8.0.12’):

sudo ln -s /usr/local/apache-tomcat-8.0.12 /Library/Tomcat

4. Permissions and mode should be okay (they were for me), but to make sure, you can run the commands below:

sudo chown -R $(whoami) /Library/Tomcat
sudo chmod +x /Library/Tomcat/bin/*.sh


5. Startup your Tomcat instance:

/Library/Tomcat/bin/startup.sh

6. Verify things are working by opening a browser window/tab to the default URL (http://localhost:8080) and taking a look at the default page.

Everything should be functional and now ready for your customizations and/or deployments.

Wednesday, May 28, 2014

How to resolve "vagrant up" failing with "VBoxManage.exe: error: Code CO_E_SERVER_EXEC_FAILURE"

I was trying to use a pre-built vagrant box, one built by Mathew Baldwin for WLS12c on CentOS 6.4, and ran into a problem: the vagrant up command failed with the "VBoxManage.exe: error: Code CO_E_SERVER_EXEC_FAILURE" error from the title.

A few things:

  • Running the VirtualBox command "vboxmanage list hostonlyifs" separately is fine.
  • I'm not running the command or session as an Administrator; in fact, that fails with a completely different error, given that it's a separate account and does not have the required vagrant box.
  • I'm using the awesome Console2 (by Marko Bozikovic) but that matters not since the same error occurs in plain old cmd.exe
  • The command does complete successfully in MobaXterm 7.1 (another awesome tool!), though later on, during the provisioning of the box (during the Puppet configuration, I believe), my machine does a hard shutdown/crash
  • VirtualBox version is 4.3.12
  • Vagrant version is 1.6.2
  • Windows 7 Pro SP1, 64-bit 

A Google search revealed others have had this problem, but not too many actual solutions, or at least not one that worked for me. Here's how I got past this problem without using MobaXterm (given that it had shut down my machine each time previously).

The fix was to start VirtualBox before running vagrant up. Even still, there remains some instability due to what seems to me like timing issues with vagrant sending commands to VirtualBox as on occasion the process fails (i.e. times out) waiting for the VM to start up and show the login prompt.
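
In practice that means launching the VirtualBox GUI first and only then running vagrant; from a command prompt it looks something like this, where the path is the typical default install location and may differ on your machine:

start "" "C:\Program Files\Oracle\VirtualBox\VirtualBox.exe"
vagrant up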

Tuesday, October 02, 2012

Oracle OpenWorld 2012: Monday

Database 12c Features

Following yesterday's big announcements and new rumors, I made a few schedule changes, ensuring I allotted more time in the demo grounds to talk to various Oracle specialists concerning Database 12c. The demo grounds are truly amazing, with a wealth of contacts to be made and things to be learnt from talking to the various "informed" vendors (including Oracle). Even more amazing is the large number of people that come solely for the purpose of winning free stuff - not that I'd complain if I won one of the numerous iPads on offer, or even better, the $10,000 offered by EMC (good to know where my company's money is going). I'm curious if any of these vendors do any analysis on the "real" contacts made vs. those just looking for stuff, and the follow-up sales made.

There are many new and usable features in 12c, and I would argue this will be the biggest release Oracle will have done to date (when it ships) in terms of changes and features. The upgrade process via DBUA has been given some attention, with parallelism during the upgrade itself, fix-it scripts, resumption from some failures (instead of starting from scratch for everything) and a post-upgrade health check. Transportable Tablespaces (TTS) via Data Pump will be more efficient by automatically running all prerequisite checks and doing the full export and import, meaning it figures out all the metadata dependencies, creates all the users, objects, grants, etc. from the source on the target, then copies across all the data and voilà!

Some questions (but not all) surrounding Pluggable Databases (PDB), which I mentioned yesterday, were answered today as well. It will support pre-12.1 databases (only 11.2.0.3 is my guess, based on the slides used) which will plug into a 12.1 container database (the housing or hypervisor database, if you will). All databases can be backed up as one and recovered separately, including point-in-time (PIT). PDB can also be run in standby setups, though I'm still left wondering how PDB works exactly in a RAC environment. Migration into this architecture appears to be done via Data Pump (I'm guessing TTS, since otherwise it would be a long migration). Resource utilization is handled by Resource Manager (DBRM), so processor (and other settings?) usage can be allocated to each database. Patching/upgrades can be done separately for each database, though I'd imagine the container DB must always be at the highest release (similar to Grid Infrastructure). One question, out of many I have, is how this affects Oracle VPD, and will it be a paid feature? (we all know the answer is yes)

Another very interesting and immediately usable feature is "Automatic Usage Based Compression". Essentially a heat map of table partitions is used to compress various partitions based on usage/activity (INSERT, UPDATE and DELETE statements) using some user defined policies. Compression is done online, in the background. Does this mean HCC is open to all now? Is this using DBMS_REDEFINITION under the covers for the online compression change? What about compressing table blocks and not just partitions? Will the threshold for hot, warm and cold be adjustable (there is always some hidden parameter)? Is this part of the compression package/option and how much will it cost?

Redaction of Sensitive Data is another big feature. This moves the masking of sensitive data from the application level to the database level where the DBA does this change online and immediately (no logoff/logon required) based on set policies. I'm left wondering how this affects the Data Masking Pack and Label Security? (and again, how much?)

A new feature (which is actually available now, supported for Exadata) is RMAN Cross Platform Incremental Backups. This uses RMAN to do a platform conversion from a big endian platform such as Solaris SPARC, IBM AIX or HP-UX to Linux using backups/restores, where the incrementals can be applied to the target until you are ready to switch platforms, at which point the actual switchover takes considerably less time (and effort). Note 1389592.1 explains this in greater detail.

The Jimmy Cliff Experience

Whoever thought up the Oracle Music Festival deserves a raise, and even better, the party responsible for bringing in Jimmy Cliff deserves a promotion! The man, Jimmy Cliff, must be in his 60s at least, but has more energy than a 20-year-old! He completely rocked the house with his energy, song arrangements and charisma. The crowd (including myself) was completely involved and enjoyed his performance so much that an encore was demanded, and graciously accepted.

I had another big day ahead of me and so left during his encore. I can already feel the pain in various body parts following my "dancing" (I use the term loosely). A good end to another great day at OpenWorld...