Storageless Data – agility at its finest

Detach your data from bare-metal machines with a global file system from Hammerspace.



We should write the word agility everywhere to remind ourselves how crucial it is for businesses these days. Building a large, capable organization is tough; it takes time to find people you trust with the skill set, knowledge, and experience you need. And even when you do, constant market changes force you to react fast to succeed! The road to success is built on a good team and fast, precise reactions.
IT can always help with speed if the approach is correct. It can be a jet engine for your business if you attach it the right way; with the right approach, IT can deliver the agility your business needs to answer the market's demands. So what is the one thing that slows us down the most? For data, the machine is the culprit: data is attached to a bare-metal storage machine, trapping it in a specific location. If we could imagine data without storage, detached from bare-metal devices, we could imagine sky-high speed and agility for your business.


Of course, we are far from storing data on anything other than bare-metal storage drives, but the industry has recently started to use a new term: storageless data. There is a lot of ambiguity in the term; some consider it marketing hype, while others consider it a real thing. The contradiction originates from the inability of data to exist outside of storage. So the question is: what is storageless data, then?
Using the term storageless doesn't mean that your data doesn't live on storage. Ultimately, compute must run on servers, and data must live on a bare-metal storage device. As with serverless computing, the contradiction in the term emphasizes that we do not need to care about the mapping to servers to get our jobs done. In the same way, storageless data is a deliberately contradictory term that says: even though I need to work with my data, I don't want to think about the underlying storage infrastructure. It is a consumer-centric approach because it puts first the perspective of the user who works with the data, rather than the perspective of the IT operator who works with the infrastructure. With Hammerspace, you get your job done without thinking about mapping to storage or the underlying infrastructure. Essentially, this concept can be described as data as a service.

Of course, there is a lot more to Hammerspace than ultimate data agility. It overcomes the siloed nature of hybrid cloud storage and delivers global data access, enables the use of any storage in hybrid cloud infrastructures, and is built for high performance. Hammerspace is a global file system that grants you access to your data from any cloud and across any infrastructure. It serves, manages, and protects data on any infrastructure… anywhere. Ultimately, Hammerspace is modernizing data workflows so they can move from IT-centric to business-centric data.

The HAMMERSPACE advantage

1. Cost profiling

The Hammerspace platform constantly monitors the available storage infrastructure and data behavior to predict the cost of different tiering scenarios, whether on-premises or in the cloud. It provides valuable, accurate information about your infrastructure costs, allowing you to make informed business decisions.


2. Business objectives oriented automation

Now it is possible to teach your infrastructure about the nature of your business. With Hammerspace, you can define your business objectives, and the software will create extensible user-defined metadata that will help the machine learning mechanism to tier and automate the data across storage, sites, and clouds in the best way for your business.


3. Multi-site, multi-cloud

A universal global namespace, virtualized and replicated at file-level granularity, enables active-active data accessibility across all sites. It allows access to your data from any cloud and across any infrastructure.


4. Data management at a file-level

Hammerspace enables you to manage your data down to the level of a particular file. The real benefit of this technology is that file-granular data management is the only way to efficiently scale across complex, mixed infrastructure without making unnecessary copies of entire data volumes.


5. No disruptions

There are two aspects to Hammerspace's non-disruptiveness: zero-downtime assimilation, which quickly brings data online, and live data mobility technology, which eliminates migration disruptions. Combined, these two technologies keep your data highly accessible and secure at all times.


6. High performance

Hammerspace delivers high performance across hybrid clouds and simplifies performance and capacity planning through a parallel, scale-out file system with direct data access.


Aside from the features and advantages mentioned above, Hammerspace offers a long list of useful capabilities:

1. Native Kubernetes Support
2. Share-level snapshots
3. Undelete
4. Data replication
5. Real-time analytics
6. Support for NFS, SMB, and S3
7. Global dedupe & compression
8. WORM data-lock
9. Data-in-place assimilation
10. Data virtualization
11. Programmable REST API
12. Kubernetes CSI driver
13. Third-party KMS integration, and more

We recommend checking out this awesome video of Hammerspace CEO David Flynn explaining the concept of storageless data. Braineeing engineers are happy to work directly with Hammerspace to ensure that the infrastructure you receive is custom-tailored for your business. If you have any additional questions, want our engineers to assess the state of your infrastructure and its compatibility with Hammerspace technology, or just want to chat, feel free to call us!



SSL Decryption: Hidden Threats no More

Why decrypt network traffic?



More and more traffic is encrypted. For the firewall to inspect all traffic and provide visibility, control, and protection at the highest level, it must also be able to decrypt it.

SSL uses both symmetric and asymmetric encryption. A session is established as follows:

  1. The client requests an SSL connection.
  2. The server responds by sending its certificate, containing its identity and public key.
  3. The client validates the certificate against its list of known certificates (PKI).
  4. The client generates a random symmetric key and encrypts it using the server's public key.
  5. The server uses its private key to decrypt the symmetric session key (from step 4).
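The exchange above can be sketched with a toy example. The RSA numbers here are tiny and insecure on purpose, and the XOR stream stands in for a real cipher such as AES; this only illustrates how an asymmetric key exchange bootstraps symmetric encryption:

```python
import hashlib
from itertools import cycle

# Toy RSA key pair (tiny primes for illustration only; real TLS uses
# 2048-bit keys or elliptic curves).
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # server's public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # server's private exponent (Python 3.8+)

# Step 4: the client generates a random symmetric key and encrypts it
# with the server's public key (e, n).
session_key = 42
encrypted_key = pow(session_key, e, n)

# Step 5: the server decrypts the session key with its private key.
decrypted_key = pow(encrypted_key, d, n)
assert decrypted_key == session_key

# Both sides now share a key for fast symmetric encryption of the session.
def xor_crypt(data: bytes, key: int) -> bytes:
    stream = hashlib.sha256(key.to_bytes(4, "big")).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(stream)))

ciphertext = xor_crypt(b"hello over TLS", session_key)
plaintext = xor_crypt(ciphertext, decrypted_key)  # symmetric: same operation
print(plaintext)
```

Note the design: the slow asymmetric operation is used exactly once, to agree on a key; everything after that uses the fast symmetric cipher.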

Types of decryption on Palo Alto Firewall

Palo Alto allows three types of decryption:

• SSL Forward Proxy

• SSL Inbound Inspection

• SSH Decryption


SSL Forward Proxy


SSL Forward Proxy decrypts SSL traffic between a host on your network and a server on the Internet. In this scenario, Palo Alto acts as an SSL proxy: it establishes one connection between your host and Palo Alto and a separate (but logically related) connection between Palo Alto and the server on the Internet.

An example in which this type of decryption is helpful: suppose your Palo Alto policies allow employees on your network to read their Twitter feed, but you want to prohibit them from sending messages and posting content (tweeting). If SSL decryption is enabled, Palo Alto can easily distinguish within the policy whether Twitter traffic belongs to "reading," "commenting," or "posting" and, based on that, deny or allow the traffic. If decryption is not enabled, Palo Alto cannot know what application is inside the SSL connection.


SSL Inbound Inspection


SSL Inbound Inspection decrypts traffic coming from external users to your internal services. For this decryption, you must have the server's private key and certificate. In this scenario, Palo Alto does not act as a proxy but forwards the request directly to the internal server; that is why you need the previously mentioned certificates, because the connection is formed directly between the host and the server. Security policies are applied to the decrypted traffic so that the firewall can block or allow traffic between the host on the public network and your internal server.


SSH Decryption


SSH Decryption is used to detect and decrypt incoming and outgoing SSH traffic. If an SSH tunnel is detected, the SSH connection is blocked so that it cannot be used to tunnel unauthorized applications and other content.




A proxy is an intermediary in communication between a client and a server. The proxy takes the packet from the client and recreates it towards the server. In the case of encrypted communication, the proxy receives an SSL request from a client and sends its own request to the server on the Internet on the client's behalf. The server's response is received by the proxy and sent back to the client in the reverse direction. This provides encrypted communication both between the client and the proxy and between the proxy and the server on the Internet.


PKI (Public Key Infrastructure)


PKI solves the problem of securely identifying a public key, using a digital certificate to verify the public key's owner. You can see the components of PKI in the picture below:




PKI is a set of hardware, software, policies, and standards used to create, manage, and distribute certificates. All of this is necessary so the certificate holder can publicly prove they are who they claim to be.

The Root CA (Certificate Authority) is the supreme authority and provides services that authenticate devices, services, and people by issuing certificates that confirm their identity and public key.

The Root CA authorizes the Intermediate CA to certify or authorize other lower CAs. Each CA has a database with active certificates, revoked certificates, issued certificates, etc.

On the end-device side, the devices themselves keep the certificates issued to them and the corresponding private key. Users refresh the certificate database themselves or with the help of software. If a CA's certificate is not in the user's database, the user will receive a warning that the site they are visiting is untrusted.
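The chain-of-trust walk described above can be sketched in miniature. Real PKI uses X.509 certificates with RSA/ECDSA signatures verified via public keys; here an HMAC stands in for a CA signature, and the names and keys are invented purely for illustration:

```python
import hmac, hashlib

# Toy CA "signing keys" (in real PKI, verification uses the CA's public key).
ca_keys = {"RootCA": b"root-secret", "IntermediateCA": b"inter-secret"}

def sign(subject: str, issuer: str) -> bytes:
    return hmac.new(ca_keys[issuer], subject.encode(), hashlib.sha256).digest()

# A chain: leaf certificate issued by the Intermediate CA, which in turn
# is certified by the Root CA.
chain = [
    {"subject": "www.example.com", "issuer": "IntermediateCA",
     "sig": sign("www.example.com", "IntermediateCA")},
    {"subject": "IntermediateCA", "issuer": "RootCA",
     "sig": sign("IntermediateCA", "RootCA")},
]

trusted_roots = {"RootCA"}  # the client's local database of trusted CAs

def verify_chain(chain) -> bool:
    # Check each certificate's signature against its issuer...
    for cert in chain:
        expected = sign(cert["subject"], cert["issuer"])
        if not hmac.compare_digest(cert["sig"], expected):
            return False
    # ...and require the chain to end at a CA the client already trusts.
    return chain[-1]["issuer"] in trusted_roots

print(verify_chain(chain))
```

If the topmost issuer is missing from `trusted_roots`, verification fails, which is exactly the "untrusted site" warning the browser shows.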


Configuring decryption

  1. Create 2 self-signed certificates (trusted and untrusted)
  2. Export the trusted certificate and import it to the end-user (computer)
  3. Create a decryption profile (optional)
  4. Create a decryption policy
  5. Check if the traffic is decrypted


  1. Creating self-signed certificates

Device> Certificate Management> Certificates> Generate


The image below shows the steps to create a self-signed trusted certificate



The image below shows the steps to create a self-signed untrusted certificate



  2. Export of self-signed trusted certificate and installation on a client

The image below shows the steps to export a self-signed trusted certificate



The image below shows the steps to install a self-signed trusted certificate on a client









  3. Create a decryption profile

Object> Decryption> Decryption Profile> Add

The image below shows the steps to create a decryption profile. The decryption profile serves to let the firewall know how to treat the traffic being decrypted.



  4. Creating a decryption policy

Policies> Decryption> Add

The image below shows the steps for creating a decryption policy. The policy determines which traffic will be decrypted. The name, source and destination zone, and traffic action (No Decrypt or Decrypt) are defined.




Defining the source of the traffic to be decrypted




Defining the destination to which the decrypted traffic is going



Defining the type of services.



Defining the action to be applied to the decrypted traffic as well as the type of decryption and the decryption profile.


Find and check decrypted traffic in the log section.



Cost of having it easier – is it worth it?

Microsoft Local Administrator Password Solution - LAPS



Recognizing the problem

In most companies, there is a serious flaw that could compromise IT infrastructure more easily than you think…

When you installed a fresh copy of Windows 10, you probably noticed that the local Administrator account is disabled by default. Do you know why that is?
Maybe someone gave you the task of managing passwords for local accounts on domain computers using GPO, but you noticed that the option is greyed out. Is it disabled on purpose? For the same reason?

Yes. It is.


People with bad intentions (at least toward you), let us call them hackers, found a significant flaw in the GPO password distribution process. Although Windows encrypted the distributed password, Microsoft published the encryption key in its technical documentation, so finding out the password was child's play.
Microsoft recognized this huge vulnerability and decided to disable the option to set a password in GPO, but if you already have and use such a policy, it still works. You are just unable to change it anymore…

But why did Microsoft disable the built-in administrator account by default?
Although the built-in administrator is disabled by default, it is still widely used in IT organizations.
How does this happen, and why?

Every computer in your company needs a local administrator account.
This local administrator account is mostly irrelevant for managing workstations and servers within an enterprise environment, because the right thing to do (which you probably already did) is to set up domain-level groups with restricted privileges under the local Administrators group. But if your computer cannot connect to any domain controller (for numerous reasons), or your network is down, a local administrator account is necessary.
In that case, it will be used by administrators or technicians from the Help desk to access workstations and servers and resolve any issues.

Since administrators and Help desk technicians need to maintain many computers, it is easier and more convenient for them to have the same account and password on every one of them.
Also, most companies deploy Windows client OS on workstations for the first time as a copy of the Windows 10 image. Administrators often re-enable local administrators in Windows 10 because they need local access to their computers in the event of a disaster. If they configure their Windows 10 computers with an image where the local administrator account is enabled, then every computer provisioned via that image will have the same credentials.

In short, one password can be used to access every workstation.
For convenience and ease of use (more often than not), these accounts have been given easily guessable passwords, and sometimes the same password is used across large parts of the domain.

The fact that every computer in your network can be accessed with the same username and password is discoverable by a malicious user or attacker.
Using the local SAM database or tools such as mimikatz or impacket, an attacker or malicious user can recover the local administrator password. Since many attackers know about this potential vulnerability, it will not take them long to try that password on every other computer in your network.
Infecting and compromising only one workstation and stealing that single password is all they need to access everything. I mean EVERYTHING!
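Why one stolen credential is enough can be shown in a few lines: identical passwords produce identical hashes, so a hash dumped from one machine matches every machine that reuses the password. This toy model uses SHA-256 and invented hostnames purely for illustration (Windows actually stores NTLM hashes):

```python
import hashlib

# A fleet of 100 workstations that all share one local admin password.
shared_password = "Winter2021!"
fleet = {f"WS{i:03d}": hashlib.sha256(shared_password.encode()).hexdigest()
         for i in range(1, 101)}

# The attacker compromises a single workstation and dumps its hash...
stolen_hash = fleet["WS001"]

# ...then simply tries it everywhere: every host with the same hash falls.
owned = [host for host, h in fleet.items() if h == stolen_hash]
print(len(owned))
```

With unique passwords per machine (which is exactly what LAPS enforces), `owned` would contain a single host instead of the whole fleet.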

As things start to happen in front of your eyes, and you see the damage being done, you realize the true cost of having it easy…


The nightmare unfolds


That attack looks something like this:

  1. The attacker targets many workstations at once.
  2. A user running as local admin is compromised; the attacker harvests credentials.
  3. The attacker starts the "credentials crabwalk."
  4. The attacker finds a host with domain-privileged credentials, steals them, and elevates privileges.
  5. The attacker owns the network and can harvest whatever he wants.
  6. You realize that you can't wake up from this nightmare…



Even if the local administrator account is disabled for network operations and can be used only when accessing the local machine directly, it is still a significant security risk. Attackers will use it for privilege escalation: they simply log in remotely with any other account and then "run as" the local administrator to gain administrative privileges.


Preventing this from happening. How hard can it be?


Keeping the local administrator account secure requires a lot of maintenance. It doesn’t matter which approach you take.
The system administrators must ensure that the password is unique on every computer object and not too simple, obvious, or reused anywhere else. This also needs to be applied after deploying machines from a template or when deploying them manually.
Backsliding is very easy and likely to happen.
Reusing the same passwords is so much easier from a maintenance perspective that people forget about the security angle.
Discovering the problematic computers is an issue of its own… If you have hundreds or thousands of computer objects in your organization, some might have weak, reused passwords, and some might not. That is most likely the case, since machines are probably deployed in batches over a significant period and by different people.


Here is how to avoid it.


The obvious way to avoid problems with the local administrator is to disable it entirely, but that leads to other issues, such as when the network becomes unavailable.

You could also create a unique password for every workstation/server. That may be a Sisyphean task, especially if you have hundreds or thousands of computers.

Fortunately, Microsoft released a tool that creates random local administrator passwords using GPO and then stores them encrypted in Active Directory.
Hackers and malicious users can still steal credentials, but it would take a lot, A LOT more work to compromise workstations or servers.


Having it easy, but at the same time secure? It’s possible!


Here is an overview of the solution:



    • Local admin passwords are stored encrypted in Active Directory.
    • Computer accounts can only write passwords; they cannot read them (in case a machine gets compromised).
    • The solution addresses only local accounts.
    • Single installer contains:
      • Group Policy Client Side Extension (CSE)
      • ADM/ADMX templates for GPO management
      • PowerShell module
      • Password Decryption Service (PDS)
      • Fat client UI for reading/resetting passwords

The core of the LAPS solution is a GPO client-side extension (CSE) that performs the following tasks and can enforce the following actions during a GPO update:

    • Checks whether the password of the local Administrator account has expired.
    • Generates a new password when the old one has expired or is required to change before expiration.
    • Validates the new password against the password policy.
    • Reports the password to Active Directory, storing it in a confidential attribute on the computer account.
    • Reports the next expiration time for the password to Active Directory, storing it in an attribute on the computer account.
    • Changes the password of the Administrator account.

The password can then be read from Active Directory by users who are allowed to do so. Eligible users can request a password change for a computer.
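The rotation logic the CSE performs on each GPO refresh can be sketched as a simulation. The `ms-MCS-AdmPwd*` names are the real LAPS schema attributes, but the dict-based "directory", the policy values, and the function names are hypothetical:

```python
import secrets
import string
from datetime import datetime, timedelta

# Simulated AD computer object (a plain dict stands in for Active Directory).
ad_object = {"ms-MCS-AdmPwd": None,
             "ms-MCS-AdmPwdExpirationTime": datetime.min}

POLICY = {"length": 14,
          "alphabet": string.ascii_letters + string.digits + "#$%"}

def generate_password(policy) -> str:
    # secrets (not random) for cryptographically strong choices.
    return "".join(secrets.choice(policy["alphabet"])
                   for _ in range(policy["length"]))

def gpo_update(ad_object):
    """Mimic what the LAPS client-side extension does on each GPO refresh."""
    now = datetime.utcnow()
    # 1. Check whether the current password has expired.
    if now < ad_object["ms-MCS-AdmPwdExpirationTime"]:
        return False  # still valid, nothing to do
    # 2-3. Generate a new password and validate it against the policy.
    new_password = generate_password(POLICY)
    assert len(new_password) == POLICY["length"]
    # 4-5. Report password and next expiration time back to the directory.
    ad_object["ms-MCS-AdmPwd"] = new_password
    ad_object["ms-MCS-AdmPwdExpirationTime"] = now + timedelta(days=30)
    # 6. (The real CSE would now change the local Administrator password.)
    return True

rotated = gpo_update(ad_object)
print(rotated, len(ad_object["ms-MCS-AdmPwd"]))
```

A second call right after rotation returns `False`, because the stored expiration time is still in the future; that is how the CSE avoids rotating on every refresh.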




  • On a dedicated LAPS management server, install all available options presented in the installer:
    • GPO extension, so computer objects can be managed by LAPS.
    • Fat client UI to view passwords.
    • PowerShell module for completing and configuring the solution.
    • GPO templates for configuring LAPS settings in AD GPO.
    • PDS service.

AD preparation

Extending the AD schema is required, since three new attributes are added:

    • ms-MCS-AdmPwd (stores the built-in local administrator password in encrypted form)
    • ms-MCS-AdmPwdExpirationTime (stores the time to reset the password)
    • ms-MCS-AdmPwdHistory (needed in case of restoring a backup)

The extension is executed with two commands (you must be a Schema Admin to run them), and my advice is to test AD replication and make sure it works without any issues before executing them:

Import-module AdmPwd.PS – imports the modules for LAPS management.
Update-AdmPwdADSchema – extends the schema with the new attributes.


Delegation of permissions

Three roles need to be implemented:

Password Decryption Service role – has permission to interact with AD directly
Password Reader role – has permission to read admin passwords via PDS
Password Reset role – has permission to reset admin passwords via PDS

The best practice is to implement these roles as AD groups, so three groups need to be created and their membership populated.


Service account permissions:

PowerShell cmdlet for granting permissions to the Password decryption service to read and write information to Active Directory:
Set-AdmPwdServiceAccountPermission –Identity <name of the OU to delegate permissions> -AllowedPrincipals <name of Password Decryption Group>



Managed machines access Active Directory using the special well-known account SELF, so the necessary permissions must be added to it. This is required so each machine can update the password and expiration timestamp of its own built-in Administrator account on its own computer object in AD.
Set-AdmPwdComputerSelfPermission -Identity <name of the OU to delegate permissions>


User Rights to read passwords:

Add the extended permission Read Local Admin Password to the group that will be allowed to read the local administrator’s password for managed computers. It is done using PowerShell.  You may need to run Import-module AdmPwd.PS if this is a new window.
Set-AdmPwdReadPasswordPermission -Identity <name of the OU to delegate permissions> -AllowedPrincipals <name of Password Readers Group>


User Rights to reset passwords:

Add the extended permission Reset Local Admin Password to the group that will be allowed to reset the local admin account’s password for managed computers.  It is done using PowerShell.  You may need to run Import-module AdmPwd.PS if this is a new window.
Set-AdmPwdResetPasswordPermission -Identity <name of the OU to delegate permissions> -AllowedPrincipals <name of Password Resetters Group>


PDS Server installation


PDS is responsible for creating and maintaining the key pairs used for password encryption and decryption, communicating with Active Directory, auditing users' requests for password reads/resets, and registering/maintaining the DNS SRV record that clients use to discover the service.
The Password Decryption Service thus handles some interactions with the AD infrastructure.
Computers do not directly read from Active Directory; they only directly write. The Password Decryption Service maintains the decryption keys and is responsible for password reads, decryption, and password resets.


Key pair

To store encrypted passwords in Active Directory, a key pair must be created. For security, only the Key Admin role can generate a new key pair. (By default, the Key Admin role is defined as an Enterprise Admin).

Key pairs are generated using PowerShell. Upon a request to create keys, two files will be created in the configured location:

One file contains the public key and should be distributed to managed machines via GPO.
One file contains the private key and is used by the Password Decryption machine(s).

New-AdmPwdKeyPair -KeySize <Keysize of 1024, 2048 or 4096>

Keys are stored in C:\Program Files\AdmPwd\Svc\CryptoKeyStorage.


Group policy settings


Group Policy is used to enable the local admin password solution and to configure various settings.
For GPO maintenance, the ADMX template needs to be installed on the machine on which Group Policy Management Console (GPMC) is running.
In GPO UI, all configuration settings related to CSE configuration are located under “Computer configuration/Policies/Administrative Templates/AdmPwd/Managed Clients” path.

The following configuration values are supported:






The Password Decryption Service (PDS) logs its activity into a dedicated Windows Event log.
Auditing for users who query for a computer’s local administrator password can be accomplished by reviewing the LAPS Service Event log located under Applications and Services Logs.


Client installation


Installing the CSE alone does not initiate password management; the GPO settings must be enabled for it.
For the GPO to apply on servers or workstations and for the LAPS solution to work, the LAPS client must be installed on each managed computer.
It can be installed/updated/uninstalled on clients using various methods, including the Software Installation feature of Group Policy, SCCM, a login script, manual install, etc.


Using LAPS to retrieve password


To retrieve a computer object's password, all you need to do is install the "Fat client UI" on the workstation you use to administer your infrastructure and select the computer object whose local admin password you need.




As always, recognizing the problem is the first step in resolving it.
Ignoring it could have enormous, potentially unrecoverable consequences.

Security in IT is always about compromises: searching for the thin line that separates ease of use and comfort from improved security.

Luckily, Microsoft created a solution for the scenario described in this post, one that favors ease of use in large organizations while equally improving security.

People with bad intentions won’t be able to take advantage of you… At least, not that easily…


