Vulnerability-Assessment-patching-Semi-Automation

Download or git clone the scripts from the URL below:

https://github.com/vinothkumarselvaraj/vulnerability-Assessment-patching-Semi-Automation.git

vulnerability_Assessment_patch_pkg_render_v2.py:

This Python script gives you the list of package names that need to be updated for patching a server group. The list is obtained by rendering the VA report provided by the InfoSec team: the "plugin_text" column is grouped by node_type and the resulting package list is filtered to remove duplicates.
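For a sense of how that rendering works, here is a minimal sketch (hypothetical, not the actual script): it assumes the sheet name and column letters shown later in the usage example, and an "Installed package :" pattern inside plugin_text.

# A minimal sketch of the grouping/de-duplication idea (the real script's parsing
# rules may differ). Assumes "Hostname" and "plugin_text" column letters and an
# "Installed package :" pattern inside the plugin text.
import re
from collections import defaultdict
from openpyxl import load_workbook

def packages_per_node_type(xlsx_path, sheet, host_col, plugin_col, node_types):
    """Group plugin_text rows by node type and return a de-duplicated package set each."""
    wb = load_workbook(xlsx_path, data_only=True)   # read cell values, not formulas
    ws = wb[sheet]
    pkgs = defaultdict(set)
    for row in range(2, ws.max_row + 1):            # skip the header row
        host = ws["%s%d" % (host_col, row)].value
        text = ws["%s%d" % (plugin_col, row)].value
        if not host or not text:
            continue
        for ntype in node_types:
            if ntype in str(host):
                pkgs[ntype].update(re.findall(r"Installed package\s*:\s*(\S+)", str(text)))
    return pkgs

if __name__ == "__main__":
    result = packages_per_node_type("JPE1_BAVA_full.xlsx", "VA", "F", "G", ["ctl", "kvm"])
    for ntype in result:
        print("%s: %s" % (ntype, sorted(result[ntype])))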

vulnerability_Assessment_USN_action_v2.py:

This script gives you the ACTION item (Restart, Reboot, or Standard system update) required for each USN number in the document.
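As a rough illustration of that lookup (a hypothetical sketch, not the actual script; the advisory URL pattern, the keyword checks and the example USN number are assumptions):

# Hypothetical sketch of the per-USN lookup; the real script's rules may differ.
import requests

def usn_action(usn_number):
    """Return 'Reboot', 'Restart' or 'Standard system update' for a USN number string."""
    url = "https://usn.ubuntu.com/%s/" % usn_number
    page = requests.get(url, timeout=10).text.lower()
    if "reboot" in page:
        return "Reboot"
    if "restart" in page:
        return "Restart"
    return "Standard system update"

if __name__ == "__main__":
    print(usn_action("3887-1"))   # example USN number only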

Prerequisites for the Excel input

1) Add a "Hostname" column to the VA sheet, mapping each entry to the "IP address" column provided in the report.

2) If any cell contains a formula, delete the formula and keep only its value, as this Python script reads cells by value (see the short openpyxl note just after this list).

Ref Link:- https://support.office.com/en-us/article/delete-or-remove-a-formula-193dbbed-6fcf-4f07-9119-5acff81b89c5

3) Make sure your workstation has network connectivity to https://usn.ubuntu.com/, as the script makes a curl-style HTTP call for each USN number given.

4) Your workstation should have the Python module "openpyxl" installed. [pip install openpyxl]
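On the formula point above, this quick check (the file name is just the example used later) shows why prerequisite 2 matters: openpyxl either returns the formula string itself or only the cached value, never a freshly recalculated result.

from openpyxl import load_workbook

# With data_only=False (the default) a formula cell returns the formula string itself;
# with data_only=True it returns the value Excel last cached, or None if there is none.
wb = load_workbook("JPE1_BAVA_full.xlsx")
print(wb["VA"]["F2"].value)                          # e.g. '=VLOOKUP(...)' if a formula is present

wb_values = load_workbook("JPE1_BAVA_full.xlsx", data_only=True)
print(wb_values["VA"]["F2"].value)                   # cached value, or None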

Script Usage:

$ python vulnerability_Assessment_patch_pkg_render.py <node_type>

where node_type is one or more of 'bmk', 'cadf', 'ceph', 'cfg', 'consul', 'cpu', 'ctl', 'des', 'dns', 'jmp', 'kvm', 'nal', 'ntp', 'ntw', 'prx', 'rmq', 'sql' or 'all'

Output sample:

[email protected]:~/automation_final# ll
total 652
drwxr-xr-x  2 root root   4096 Apr 16 17:01 ./
drwx------ 12 root root   4096 Apr 16 17:01 ../
-rw-r--r--  1 root root 647232 Apr 16 15:15 JPE1_BAVA_full.xlsx
-rw-r--r--  1 root root   3076 Apr 16 16:44 vulnerability_Assessment_patch_pkg_render.py
-rw-r--r--  1 root root   2068 Apr 16 16:32 vulnerability_Assessment_USN_action_v2.py
[email protected]:~/automation_final# python vulnerability_Assessment_patch_pkg_render.py all

                        =========IMPORTANT==========

1)Add the column "Hostname" in the VA sheet to match the "IP address" column provided in the report.
2) Delete a formula and keep only the value from the cells if any formula used, as this python script will read the cell on its value.
 Ref Link:- https://support.office.com/en-us/article/delete-or-remove-a-formula-193dbbed-6fcf-4f07-9119-5acff81b89c5

                        ===========================

Enter the Excel Document name with full path: JPE1_BAVA_full.xlsx
The below sheets are available in the excel document
[u'BA', u'VA', u'Sheet2', u'MA', u'vjpe1', u'filtered_package', u'Sheet3', u'USN']
Enter the sheet name to load data from: VA
Enter the Column name of hostname entries: F
Enter the Column name of Plugin_text entries: G

===== Execution of vulnerability_Assessment_patch_pkg_render.py will generate an xlsx file with the name syntax [VA_Report_with_pkg_filtered__<current_time>], which needs to be given as the input file for vulnerability_Assessment_USN_action_v2.py ============

-rw-r--r--  1 root root 1942718 Apr 16 17:47 VA_Report_with_pkg_filtered__20190416-174728.xlsx
-rw-r--r--  1 root root    3076 Apr 16 16:44 vulnerability_Assessment_patch_pkg_render.py
-rw-r--r--  1 root root    2068 Apr 16 16:32 vulnerability_Assessment_USN_action_v2.py
[email protected]:~/automation_final# python vulnerability_Assessment_USN_action_v2.py

                        =========IMPORTANT==========

 Make sure your workstation has network connectivity to reach https://usn.ubuntu.com/, as the script will do curl call for each USN number given

                        ===========================

Enter the Excel Document name with full path: VA_Report_with_pkg_filtered__20190416-174728.xlsx
The below sheets are available in the excel document

[u'BA', u'VA', u'Sheet2', u'MA', u'vjpe1', u'filtered_package', u'Sheet3', u'USN']
Enter the sheet name to load data from: VA
Enter the Column name of USN entries: K

==== On successful execution, the script will generate a new xlsx file with the name syntax [VA_Report_with_USN_Action_<Current_time>], which is your final VA report.========

Python Script to List OpenStack Orphaned Resources


Whenever we hard delete a project without first releasing the resources allocated to it, the resources mapped to that now unknown (non-existent) tenant become orphaned resources.

For example, deleting a project without deleting its VMs / networks / routers / floating IPs leaves those resources allocated to a non-existent project. They will stay in the cloud environment forever, needlessly occupying capacity.

This script lists the resources that are allocated to non-existing project IDs.

$ git clone https://github.com/vinothkumarselvaraj/openstack-orphaned-resource.git

$ cd openstack-orphaned-resource

$ python openstack_orphaned_resource.py <object>

where object is one or more of 'networks', 'routers', 'subnets', 'floatingips', 'ports', 'servers', 'secgroup' or 'all'
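For reference, the core idea can be expressed in a few lines with the openstacksdk library (a sketch, not the repo's script; it assumes an admin-scoped cloud entry named "mycloud" in clouds.yaml and only covers servers):

import openstack

conn = openstack.connect(cloud="mycloud")                   # admin credentials assumed
project_ids = set(p.id for p in conn.identity.projects())   # all existing projects

# Any server owned by a project ID that no longer exists is an orphan candidate.
for server in conn.compute.servers(all_projects=True):
    if server.project_id not in project_ids:
        print("orphaned server: %s (project %s)" % (server.name, server.project_id))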

Pass-through OpenLDAP Authentication (Using SASL) to Active Directory on CentOS

The idea is to ask OpenLDAP to delegate authentication using the SASL protocol. The saslauthd daemon then performs the authentication against the Active Directory server using the LDAP protocol.

Before we begin, let's make sure we are on the same page with the terminology used in this document and its definitions.

LDAP vs Active Directory vs OpenLDAP?

OpenLDAP – OpenLDAP is a free, open-source implementation of the Lightweight Directory Access Protocol (LDAP) developed by the OpenLDAP Project. It is released under its own BSD-style license called the OpenLDAP Public License.

Active Directory is a database-based system that provides authentication, directory, policy, and other services in a Windows environment.

LDAP (Lightweight Directory Access Protocol) is an application protocol for querying and modifying items in directory services such as Active Directory and OpenLDAP.

In short:
– AD is a directory services database in a Windows environment.
– OpenLDAP is a directory services database in a Linux environment.
– LDAP is one of the protocols you can use to talk to these directories.

SASL
Simple Authentication and Security Layer (SASL) is a framework for authentication and data security in Internet protocols. It decouples authentication mechanisms from application protocols, allowing any authentication mechanism supported by SASL to be used in any application protocol that uses SASL. Authentication mechanisms can also support proxy authorization, a facility allowing one user to assume the identity of another.

Pass-through authentication is a mechanism used by some LDAP directories to delegate authentication operations (BIND) to other backends.

Pass-through authentication is completely transparent to LDAP clients: they send standard authentication operations to the LDAP directory, which handles the delegation and forwards the response to the client as if the authentication had been done locally.

Fig 1.1 – Passwords are stored in AD and the OpenLDAP directory delegates authentication to it.

In our use case, we will add the actual user profiles to our locally installed (CentOS 7) OpenLDAP server without any passwords. We will then configure pass-through authentication between OpenLDAP and AD using the saslauthd daemon, so that whenever an authentication request is sent to the OpenLDAP server, it asks Active Directory to validate the password stored in its database.

This documentation assumes that you already know about configuring OpenLDAP and Active Directory.

Ref:- To Install and configure OpenLDAP on CentOS – https://www.itzgeek.com/how-tos/linux/centos-how-tos/step-step-openldap-server-configuration-centos-7-rhel-7.html

Step 1: Connection to the backend

You need to get all connection parameters to the authentication backend. An example with Active Directory:

  • Server address: ldap://ad.hellovinoth.com (or) ldap://10.14.48.48
  • Bind DN: CN=Administrator,CN=Users,DC=hellovinoth,DC=com
  • Bind Password: ADpassword
  • Users branch: CN=DomainUsers,DC=hellovinoth,DC=com

For our environment, we can check these settings with an ldapsearch:

ldapsearch -x -LLL -H ldap://10.14.48.48 -D "IN\cloud.ADM" -w '[email protected]' -b "DC=in,DC=hellovinoth,DC=com" "(&(objectclass=user)([email protected]))"

The response we get back confirms that the connection to our AD has been established successfully.
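If you prefer to test from Python, an equivalent check with the ldap3 module looks roughly like this (a sketch; the bind password is a placeholder and the sAMAccountName filter is an assumption):

from ldap3 import Server, Connection, SUBTREE

server = Server("ldap://10.14.48.48")
conn = Connection(server, user="IN\\cloud.ADM", password="<AD bind password>",
                  auto_bind=True)                      # simple bind as DOMAIN\user
conn.search(search_base="DC=in,DC=hellovinoth,DC=com",
            search_filter="(&(objectClass=user)(sAMAccountName=cloud.ADM))",
            search_scope=SUBTREE,
            attributes=["sAMAccountName", "distinguishedName"])
print(conn.entries)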

Step 2: Define the LDAP access parameters

Add below entries in /etc/saslauthd.conf:

ldap_servers: ldap://10.14.48.48
ldap_search_base: DC=in,DC=hellovinoth,DC=com
ldap_timeout: 10
ldap_filter: sAMAccountName=%U
ldap_bind_dn: IN\cloud.ADM
ldap_password: [email protected]
ldap_deref: never
ldap_restart: yes
ldap_scope: sub
ldap_use_sasl: no
ldap_start_tls: no
ldap_version: 3
ldap_auth_method: bind

Step 3: Saslauthd setup

Install the cyrus SASL daemon and its LDAP plugin:

# yum install cyrus-sasl cyrus-sasl-ldap

Check whether your SASL daemon supports LDAP:

# saslauthd -v

If not, reinstall an LDAP aware saslauthd daemon.

Step 4: Activate LDAP as SASL mechanism

Edit the /etc/sysconfig/saslauthd file to set the mechanism to ldap and point the daemon at the configuration file:

SOCKETDIR=/var/run/saslauthd
MECH=ldap
FLAGS="-O /etc/saslauthd.conf"

Now, enable and start saslauthd:

# chkconfig saslauthd on
# service saslauthd restart

Step 5: Configure the communication between OpenLDAP and saslauthd

Update the /usr/lib64/sasl2/slapd.conf file to tell OpenLDAP how to connect to the SASL daemon. The two daemons communicate through a Unix socket (mux), configured like this:

pwcheck_method: saslauthd
saslauthd_path: /var/run/saslauthd/mux

Step 6: Add the OpenLDAP user to the saslauth group (adapt the names to your distribution's settings):

usermod -a -G saslauth ldap

Step 7: OpenLDAP configuration

Edit (or create) the OpenLDAP configuration file /etc/openldap/slapd.conf to configure the SASL parameters:

sasl-host       localhost
sasl-secprops   none

Restart OpenLDAP:

# service slapd restart

Step 8: Test SASL authentication:

You can test the SASL part with this command:

# testsaslauthd -u cloud.ADM -p [email protected]

Step 9: Create an account in OpenLDAP:

Create an LDIF file for the new user:

dn: uid=<User Name Here>,ou=People,dc=my-domain,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: <User Name Here>
uid: <User Name Here>
uidNumber: <UID_here>
gidNumber: 100
homeDirectory: /home/<User Name Here>
loginShell: /bin/bash
gecos: <User Name Here> [Admin (at) my-domain]
userPassword: {SASL}<User email ID Here>

Use the ldapadd command with the above file to create the new user in the OpenLDAP directory.

ldapadd -x -W -D "cn=ldapadm,dc=my-domain,dc=com" -f Vinoth.Selvaraj_9998.ldif

Sample .ldif file for your reference:

dn: uid=vinoth.selvaraj,ou=People,dc=my-domain,dc=com
objectClass: top
objectClass: account
objectClass: posixAccount
objectClass: shadowAccount
cn: vinoth.selvaraj
uid: vinoth.selvaraj
uidNumber: 9998
gidNumber: 100
homeDirectory: /home/vinoth.selvaraj
loginShell: /bin/bash
gecos: Vinoth.selvaraj [Admin (at) my-domain]
userPassword: {SASL}[email protected]

Congratulate Yourself!

Now, log in to your CentOS server using your Active Directory credentials.
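You can also verify the pass-through from Python before moving on (a sketch using the ldap3 module; the DN matches the sample entry above and the password is that user's Active Directory password):

from ldap3 import Server, Connection

server = Server("ldap://localhost")
conn = Connection(server,
                  user="uid=vinoth.selvaraj,ou=People,dc=my-domain,dc=com",
                  password="<Active Directory password>")
print(conn.bind())   # True means OpenLDAP delegated the bind to AD via saslauthd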

Reference Link below:

https://www.itzgeek.com/how-tos/linux/centos-how-tos/step-step-openldap-server-configuration-centos-7-rhel-7.html

https://gauvain.pocentek.net/docs/openldap-delegate-auth/

https://ltb-project.org/documentation/general/sasl_delegation

https://blogs.msdn.microsoft.com/alextch/2012/04/25/configuring-openldap-pass-through-authentication-to-active-directory/


Cheers,
Vinoth Kumar Selvaraj
07/Feb/2019

“nova.compute.resource_tracker” Out of Sync!

Have you ever noticed that the metrics shown by "free -m" on a compute node and the output of "nova hypervisor-show" are not in sync?

Fig 1.0 – Screenshot comparing RAM usage reported by nova_tracker & “free -m” command.


Let's see the reason behind this "out of sync"!

The "Used_now" value shown in the "$ nova hypervisor-show" output is the cumulative RAM reserved by the nova-compute service for the VMs provisioned on that specific hypervisor (compute node) so far.

For example, let's consider compute node [cmp054], which has a total of 161GB of physical RAM installed. As per Figure 1.1 attached below, we have 6 active running VMs residing on this compute node, cmp054.

Fig:1.1 – Screenshot of RAM allocated to each VMs in the compute node

The flavor size allocated to each VM is as follows:

VM1 – 16GB,
VM2 – 8GB,
VM3 – 32GB,
VM4 – 32GB,
VM5 – 32GB,
VM6 – 32GB,
———————
Total – 152GB

To avoid complexity, let's assume ram_allocation_ratio = 1.0 for our environment.

So, given that "ram_allocation_ratio" is set to 1.0 in the nova.conf file, the nova-compute service on the cmp054 node has already reserved 152GB of the 161GB of available RAM for the existing active VMs. Now, when you try to provision a new VM with more than 9GB of RAM, the Nova scheduler will consider cmp054 invalid and throw an error stating that the node does not have enough resources available.
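The arithmetic behind that rejection is simple; here it is as a small sketch (numbers in GB for readability, whereas Nova itself tracks MB and also subtracts reserved_host_memory_mb):

total_ram_gb = 161
ram_allocation_ratio = 1.0
allocated_gb = sum([16, 8, 32, 32, 32, 32])           # flavors of the 6 running VMs = 152 GB

schedulable_gb = total_ram_gb * ram_allocation_ratio - allocated_gb
print(schedulable_gb)                                  # 9 GB, so a flavor asking for more is rejected

# With overcommit, e.g. ram_allocation_ratio = 1.5, the same host would advertise
# 161 * 1.5 = 241.5 GB of schedulable RAM, regardless of what "free -m" reports.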

Notably, the RAM utilisation that you see from the "free -m" command is the actual RAM utilisation on that compute node, irrespective of the RAM allocated to the guest VMs.

Fig 1.2 – Screenshot of actual RAM usage on compute node

In our case, "free -m" on the cmp054 node shows 14GB of RAM used (refer to screenshot 1.2 attached above). However, the resource tracker in the nova-compute log reports that we have used 152GB of RAM (the same value that we see in the "$ nova hypervisor-show" output). This is because the 6 active customer VMs were provisioned with large RAM flavors but are not running any heavy workload, so the actual RAM usage on the host OS (compute node) remains as low as 14GB.

Hence, the used RAM reported by "free -m" and "nova hypervisor-show" need not be the same.

P.S:-
OpenStack allows you to overcommit CPU and RAM on compute nodes. This will enable you to increase the number of instances running on your cloud at the cost of reducing the performance of the instances.

Ref:- https://docs.openstack.org/arch-design/design-compute/design-compute-overcommit.html

Regards,
Vinoth Kumar Selvaraj