How a Fortune 50 Giant Found the Right Identity Access Partner

As organizations constantly seek to expand their market reach and attract new business opportunities, identity management (specifically SSO, user provisioning and management) has evolved as an enabling technology to mitigate risk and improve operational efficiency. As the shift to cloud-based services accelerates, identity management capabilities can be delivered as hosted services to drive further operational efficiency and improve business agility. A Fortune 50 giant designed and developed a cloud-based Identity Management solution and offered existing and prospective SaaS vendors the opportunity to integrate with the product, test it and take it live as a full-fledged Single Sign-On solution. 8KMiles, being a cloud-based identity services company, accepted the engagement to fulfill the client's requirement.
The company opted for 8KMiles because 8KMiles is a state-of-the-art solution provider that practices the Scrum methodology. 8KMiles never hesitated to take up ad-hoc requirements, thanks to its industry-specific team of experts offering 24/7 development support. 8KMiles pitched in to help the client by identifying their pain points and as-is scenarios, and then worked extensively with the company and its SaaS vendors to:
1. Establish formal Business Relationship with SaaS Vendors
2. Pre-qualify SaaS Vendor
3. Configure SaaS Application for Partner company on Identity Cloud Service SAML SSO integration, Test and Certify
4. Prepare IDP Metadata
5. Establish a stringent QA process
6. Complete Documentation
a. Conformance and interoperability test report
b. SAML SSO Technical documentation
c. A video explaining the steps involved in the integration
d. Provide metadata configuration and mapping attributes details
7. Build Monitoring Tool
8. Adopt quality assurance with two-level testing (manual & automation)
9. Configure, integrate, troubleshoot, monitor and produce reports using the 8KMiles MISP™ tool

Thus, 8KMiles enabled this Fortune 50 giant to attain the following business benefits:
• Refinement of user self-service functionalities
• Activation of users & groups and linking of SaaS applications to user accounts in the cloud
• Enablement of SSO to these SaaS apps and user access via SAML 2.0
• Usage of OAuth 2.0 to authorize configuration changes
• Adoption & testing of different methods of SSO for the same SaaS app
• Documentation of the process in a simple manner
• Automation to test & report on all aspects of the integration without human involvement
For more information or details about our Cloud Identity access solution, please write to sales@8kmiles.com

Author Credit:  Ramprasshanth Vishwanathan, Senior Business Analyst- IAM SBU

Life Sciences Technology Trends to Expect in 2017

Life Sciences industry dynamics are constantly changing, especially in terms of handling ever-growing data, using modern cloud technology, implementing agile business models and aligning with compliance standards. Here are some of the Life Sciences technology trends predicted for 2017.

1) Cloud to manage Ever-growing Data

The growing volume of data is one of the major concerns among Life Sciences players. There is a constant need to manage and optimize this vast data into actionable information in real time, and this is where cloud technology provides the agility required to achieve it. Life Sciences companies will continue to shift to the cloud to address inefficiencies and to streamline and scale their operations.

2) Analytics to gain importance

Data is the key driver for any Pharma or Life Sciences organization and will determine the way drugs are developed and brought to market. The data is generally distributed and fragmented across clinical trial systems, databases, research data, physician notes, hospital records, etc., and analytics will aid to a great extent in analyzing, exploring and curating this data to realize real business benefits from the data ecosystem. 2017 will see a rise in trends like risk analytics, product failure analytics, drug discovery analytics, supply disruption predictive analytics and visualization.

3) Life Sciences players and HCPs will now go digital for interactions

There was a time when online engagements were just a dream due to limitations in technology and regulations. Embracing a digital channel will open up a faster mode of communication among Life Sciences players, HCPs and consumers. These engagements are not only easy and compliant but are integrated with applications to meet industry requirements. This will also help Life Sciences players reach more HCPs and meet customers' growing expectations for online interactions.

4) Regulatory Information Management will be the prime focus

When dealing with overseas markets, it is often critical to keep track of all regulatory information at various levels. Often, information on product registrations, submission of content plans, health authority correspondence, source documents and published dossiers is disconnected and not recorded in one centralized place. So programs that aid in aligning and streamlining all regulatory activities will gain momentum this year.

To conclude, Daniel Piekarz, Head of Healthcare and Life Sciences Practice at DataArt, stated: “New start-ups will explode into the healthcare industry with disruptive augmented reality products without the previous limitations of virtual reality. As this technology advances the everyday healthcare experience, it will exist on the line between the real world and virtual in what is being called mixed reality.” Thus 2017 will see a paradigm shift in the way technology revolutionizes Life Sciences players' go-to-market, with early adopters of the above gaining a competitive edge and reaping business benefits compared to laggards!

Identity Federation – 10 Best Practices from a User’s perspective

Federation is a concept that deals with the connection of two parties/providers (an Identity Provider (IDP) and a Service Provider (SP)): one vets the credentials of the user and the other provides a service to the user, depending on the successful vetting of the credentials by the first provider. While setting up these federations, certain best practices can be followed by the two parties that make the federation experience holistic for the user. This blog post explores and highlights these practices.

Let us start with the SP side, as this is where the user lands after a federation. The following are some of the best practices to be followed on the SP side.

  1. If the user has reached the SP for the first time, it is good to make sure (with the consent of the user and with due thought to the user's privacy) that some identifying information/data (like the immutable ID, email ID, etc.) of the user can be stored at the SP. This allows the user's subsequent visits to be tied to it, which may be needed to ensure that the user gets a better service experience at the SP each time. If the intention of the federation is not to expose or tailor user/usage-specific sites, then this need not be followed.
  2. The SP should be able to link the user to multiple applications protected by the SP, using the identifying information from the federated transaction, preferably immediately after federation time, in order to establish continuity of the services the particular user was offered the last time they logged in to the SP applications and/or to tailor the application's preferences to the federated user's profile.
  3. Wherever possible, it is better to use local or remote provisioning of the user at the SP. Critical aspects like security, privacy and the organization's policy on handling external users and their attributes dictate which type of provisioning is best. This provisioning process again helps speed up the user experience at the SP application and assists in giving better service to the same returning user.
  4. Sending the right assertion parameters to the downstream application.

This is critical, as some of the vital information such as role information, auxiliary user attributes, preferences that the application requires need to be passed on appropriately to the application.  The application might be making important decisions based on these parameters in order to address the user’s needs correctly.

  5. Redirect to appropriate URLs at the Service Provider in both the “User Success” and “User Failure” cases. Failure could be because of the following reasons:

a) User not having the right role, privilege or permission to access the site or part of the site, as the assertion did not have them

b) User got authenticated correctly at the IDP, but IDP failed to send the right assertion to the SP

c) Failure of user disambiguation process at the SP

d) User unable to be linked to the right accounts at the SP

In each case, if the failure URL gives an appropriate error message, the user knows exactly why they could not access the resource. Ticketing software would then help the user generate a ticket for the failed transaction and get a resolution from the SP.

Let us now focus on the IDP side, as this is where the user usually authenticates in order to reach an SP application in a federation.   The following are some of the best practices to be followed on the IDP side:

  1. The most important thing is for the IDP to display an error that is meaningful to the user if and when his/her authentication fails at the IDP. This makes it easier for the user to know whether a credential issue, network issue, domain issue or some other issue caused the authentication process to fail.
  2. The IDP should mention to the user (either on their website or in the application) what the supported types of credentials allowed for authentication are. This could vary from userid/password to X.509 certificates, smartcards, OTPs or other hardware/software tokens. The user interface should lead the user to the right type of authentication, using the right type of credentials, depending on the type of service he/she wishes to get from the IDP.
  3. The IDP should be able to issue assertions to the SP that contain details like the level of assurance at which the user credential was accepted and other user attributes such as role and user preferences, if applicable. This is in addition to the primary subject information that the IDP is contracted to send to the SP during the initial metadata exchange. These extra attributes help the SP and its applications tailor their user preferences.
  4. If the IDP supports a particular profile and service, the IDP should support all the possible standard options/features linked with those profiles/services. Otherwise, users should be told what is supported. This ensures that users are not misled into assuming that all related options/features are supported. For example:

a) If the IDP supports the IDP-initiated Browser POST Profile, then it would be better if it also supports IDP-initiated Single Logout, the common NameID formats linked with the Browser POST Profile, signing and encryption of assertions, protocol responses in POST format, etc.

b) If the IDP supports the SP-initiated Browser POST Profile, then it would be better if it also supports IDP- or SP-initiated Single Logout, common NameID formats, signing and encryption of assertions, protocol responses in POST format, RelayState, accepting AuthnRequests in GET and POST formats, support for “allow create” of new IDs if an ID is not already present for a federation transaction, etc.

c) if the IDP is supporting multiple Protocols and features such as delegated authentication, redirection to other IDPs, etc., it should clearly mention the protocols and the corresponding profiles, features supported in each of the IDP supported website/application.

d) If exclusively a particular feature is not followed or supported by the IDP, it should be clearly mentioned by the IDP to its users.

All the above should be provided in layman's terms, so that the user can understand which features are supported and which are not.

  5. The IDP should clearly mention the conditions associated with privacy clauses/rules/protections with respect to user credentials/identities and their secure transport. This keeps the user informed about how their credentials will be used, and it highlights the protection measures followed to make the federated transaction secure.

 

Author Bio:
Raj Srinivas, the author of this blog, is an IAM and Security Architect with 8K Miles. His passion is analyzing the problems enterprises face in the IAM & Cloud Security domain, across verticals that include Banking, Insurance, HealthCare, Government, Finance & Mortgage, and providing in-depth solutions that have far-reaching effects for the enterprise.

SaaS Data Security More Critical Now Than Ever Before in Healthcare

If, as a healthcare payer or provider, you are using Software-as-a-Service (SaaS) solutions to provide better service to your patients and customers, data security is likely as critical to you as the business itself. The healthcare industry has shifted to cloud-based solutions to maintain electronic Protected Health Information (ePHI), and given the sensitivity of that information, data security has become more important now than ever before.

In order to keep pace with growing demand, the healthcare industry has faced the heat to provide faster, better, and more accessible care by adopting new technologies while complying with industry mandates like the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act.

Why does Healthcare need Data Security in SaaS applications?

The astonishing number of data breaches and attacks on healthcare data has forced the organizations involved to look for stronger methods of data security at every level, be it the physical level or the application level.

According to a recent study by Symantec Corporation, approximately 39 percent of breaches in 2015 occurred in the health services sector. The same report found that ransomware and tax fraud rose as increasingly sophisticated attack tactics were used by organized criminals with extensive resources. These criminals run professional operations and adopt best business practices to exploit the loopholes in the security of ePHI. They first recognize the vulnerabilities and then exploit the weaknesses of unsecured systems. The stolen health records are then sold on the black market for ten times the value of a stolen credit card.

Kevin Haley, director of Symantec Security Response, said: “Advanced criminal attack groups now echo the skill sets of nation-state attackers. They have extensive resources and a highly-skilled technical staff that operate with such efficiency that they maintain normal business hours and even take the weekends and holidays off.”

Loopholes in Healthcare Data Security

Public cloud services are cost-efficient because the infrastructure often involves shared multitenant environments, whereby consumers share components and resources with other consumers often unknown to them. However, this model has many associated risks. It gives one consumer a chance to access the data of another and there is even a possibility that data could be co-mingled.

Cloud services allow data to be stored in many locations as part of Business Continuity Plan (BCP). It can be beneficial in case of an emergency such as a power outage, fire, system failure or natural disaster. If data is made redundant or backed up in several locations, it can provide reassurance that critical business operations will not be interrupted.

However, consumers that do not know where their data resides lose control of ePHI at another level. Knowing where their data is located is essential for knowing which laws, rules and regulations must be complied with. Certain geographical locations might expose ePHI to international laws that change who has access to data in contradiction to HIPAA and HITECH laws.

Many employees use their smartphones that do not have the capability to send and receive encrypted email. So, while answering emails at home from their phone, employees may be putting sensitive data at risk.

Bring Your Own Device (BYOD) policies also put data at risk if devices are lost or stolen. Logging on to insecure internet connections can also put business and patient information at risk. Storing sensitive data on unsecured local devices like laptops, tablets or hard drives can also expose unencrypted information at the source.

Conclusion

It is obvious from such startling statistics that a large number of data breaches and cyber-attacks can occur only if applications and data storage are not secure. In addition, all employees involved should be given unique usernames and passwords and must be trained on how to keep login credentials secure, apart from training sessions on the Privacy and Security Rules.

Transferring data to the cloud comes with various issues that complicate HIPAA compliance for covered entities, Business Associates (BAs), and cloud providers such as control, access, availability, shared multitenant environments, incident readiness and response, and data protection. Although storage of ePHI in the cloud has many benefits, consumers and cloud providers must be aware of how each of these issues affects HIPAA and HITECH compliance.

The need of the hour is for all the involved parties to come together and take responsibility for data security at their end and carry it through to the next level.

It is better to invest in securing SaaS applications and medical data instead of paying huge fines which could be in millions of dollars!

Related Posts :-

Steps to HIPAA Compliance for Cloud-Based Systems

Why Healthcare Organizations Need to Turn to Cloud

Steps to HIPAA Compliance for Cloud-Based Systems

The rapid growth of cloud computing has also led to a rapid growth in concerns pertaining to security and privacy in cloud-based infrastructure. Such fears create a huge requirement for healthcare organizations to understand and implement cloud computing while remaining compliant with the Health Insurance Portability and Accountability Act (HIPAA).

The benefits offered by cloud-based technology are too good to let go. The agility and flexibility that can be gained by utilizing public, private, and hybrid clouds are quite compelling. We need cloud-based environments that can provide secure and HIPAA-compliant solutions.

But, how do you achieve HIPAA compliance with cloud?

Image Source: Mednautix

Follow the steps below to better understand how to ensure HIPAA compliance and reduce your risk of a breach.

1.      Create a Privacy Policy

Create a comprehensive privacy policy and make sure your employees are aware of it.

2.      Conduct trainings

Having a privacy policy in place isn't enough; you need to make sure it is implemented as well. For that, employees must be given all required training during the on-boarding process. You should also require this training for all third-party vendors. Develop online refresher courses on HIPAA security protocols and make it mandatory for all employees and vendors to go through them at regular intervals.

3.      Quality Assurance Procedure

Make sure all the quality assurance standards are met and are HIPAA compliant. Conduct surprise drills to find out loopholes, if any.

4.      Regular audits

Perform regular risk assessment programs to check the probability of HIPAA protocol breach and evaluate potential damage in terms of legal, financial and reputational effects on your business. Document the results of your internal audits and changes that need to be made to your policies and procedures. Based on your internal audit results, review audit procedure and update with necessary changes.

5.      Breach Notification SOP

Create a standard operating procedure (SOP) document mentioning details about what steps should be taken in order to avoid a protocol breach. Mention steps to be followed in case a patient data breach occurs.

Most often you will have a cloud service provider who takes care of a wide range of requirements, from finding resources and developing and hosting apps to maintaining the cloud-based infrastructure. While the primary responsibility for HIPAA compliance falls on the healthcare company, compliance requirements can extend to cloud service providers as “business associates”.

Are your cloud service providers HIPAA business associates?

Figuring out whether your cloud service provider can be considered a HIPAA business associate can be tough, and the answer may vary depending on the type of cloud usage. If the cloud provider is an active participant, it must also adhere to security requirements such as encryption, integrity controls, transmission protections, monitoring, management, employee screening and physical security.

Investing in HIPAA compliance procedures can save you from many hassles. Follow these steps and minimize your risk of being found noncompliant.

Ransomware on the Rise: What You Can Do To Protect Your Organisation From The Attack

Ransomware is malicious software used by cyber criminals to hold your computer files or data hostage and demand a payment from you to release the data. This is a popular method used by malware authors to extract money from organisations or individuals. Different ransomware varieties are used to get onto a person's computer, but the most common technique is to install software or use social engineering tactics, like displaying fake messages from a law enforcement department, to attack a victim's computer. The criminals do not restore access until the ransom is paid.

Ransomware is very scary as the files once damaged are almost beyond repair. But you can overcome this attack if you have prepared your system. Here are a few measures that will help you to protect your organisation from the attack.

Data Backup

To defeat ransomware, it is important to back up your data regularly. Once you get attacked you will lose access to your documents, but if you can clean your machine and restore your system and the lost documents from backup, then you need not worry. So back up your files to an external hard drive or a backup service; then, after an attack, you can turn off your computer and start over with a new setup.

Use Reputable Security Precaution

Using both antivirus software and a firewall will help protect you. It is critical to keep the software up-to-date and maintain a strong firewall, otherwise hackers might easily get in through security holes. Also, purchase antivirus software from a reputable company, because there is a lot of fake security software around.

Ransomware Awareness Training

It is important to be aware of cyber security issues and to be properly trained to identify phishing attempts. Creating awareness among staff will help them take action and deal with ransomware. As the methods used by hackers constantly change, it is necessary to keep your users up-to-date. It is also tough for untrained users to question the origin of a well-crafted phishing email, so providing security training to staff is the best way to prevent malware infection through social engineering.

Disconnect from Internet

If you are suspicious about a file or receive a ransom note, immediately stop communicating with the server. By disconnecting from the internet you might lessen the damage, as it takes some time to encrypt all your files. This isn't foolproof, but disconnecting from the internet is better than nothing, and you can always re-install software if you have backed up your data.

Check File Extensions

Always view the full file extension; it helps you spot suspicious files easily. If possible, filter the files in your mail by extension; for example, you can deny mails sent with ‘.EXE’ attachments. If you do need to exchange .EXE files within your organisation, it is better to use password-protected ZIP files.
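If you want to spot risky attachments that have already landed on a file share, a quick PowerShell pass over the directory can help. The sketch below is only an illustration; the share path is a hypothetical example, and it flags plain executables as well as files using the classic double-extension trick (e.g. invoice.pdf.exe).

# Minimal sketch: flag executable and double-extension files under a share.
# $sharePath is a hypothetical example path; adjust it to your environment.
$sharePath = "\\fileserver\shared"

Get-ChildItem -Path $sharePath -Recurse -File |
    Where-Object {
        ($_.Extension -in @(".exe", ".js", ".scr", ".vbs")) -or
        ($_.Name -match '\.(pdf|docx?|xlsx?|jpg)\.(exe|js|scr)$')
    } |
    Select-Object FullName, Length, LastWriteTime |
    Export-Csv -Path "suspicious-files.csv" -NoTypeInformation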

Exercise Caution, Warn Authorities, Never Pay

Avoid links inside emails and suspicious websites. If your PC falls under attack, it is better to use another computer to research the details. Also, report the attack to the FBI or your local cybercrime authority. Finally, never pay: it would be a mistake, because the criminals may keep making further demands and may not release your information anyway. Taking precautions to protect your data and staying alert are the best ways to prevent a ransomware attack.

In reality, dealing with ransomware requires an effective backup plan so you could protect your organisation from the attack.

Why Healthcare Organizations Need to Turn to Cloud

It is important for every healthcare organization to develop an effective IT roadmap in order to provide best services to customers and patients. Most healthcare payers and providers are moving to cloud based IT infrastructure in order to utilize the benefits that were once considered unimaginable.

But, before moving ahead, let’s check out some industry statistics and research studies.

Healthcare Organizations and Cloud Computing Statistics


Source: Dell GTAI

According to Dell’s Global Technology Adoption 2015, adoption of cloud technology increased from 25% in 2014 to 41% in 2015 alone.

Spending on cloud computing – or in simpler terms, hosted medical services – in global healthcare was $4.2bn in 2014, and this will grow by 20% every year until 2020, reaching $12.6bn.

North America is the biggest consumer of cloud computing services and by 2020 its spending on cloud based solutions will reach $5.7bn.

What kind of data can be moved to Cloud?

Critical healthcare applications can be hosted on a cloud platform to increase their accessibility and availability. Apart from these, the hardware, software and data mentioned below can also be moved to the cloud.

  • Email
  • Electronic Protected Health Information (ePHI)
  • Picture archiving and communication systems
  • Pharmacy information systems
  • Radiology information systems
  • Laboratory information systems.
  • Disaster recovery systems
  • Databases & Back up data

Why should Healthcare Organizations move to the Cloud?

1.      Low Cost

Healthcare organizations can reduce IT costs to a significant extent by moving to the cloud. Cloud-based software requires fewer resources for development and testing. This implies fewer resources for maintenance and more robust solutions at a lower cost. It is believed that over a period of 10 years, cloud-based applications cost 50% less than traditional in-house hosted applications.

2.      More Accessibility

It is important that healthcare data is available to doctors as quickly as possible so that they can diagnose and analyze the patient's situation sooner and take the right steps to improve their condition. Cloud computing improves web performance for users even in remote locations, without having to build out additional data centers.

3.      Higher Flexibility

A cloud-based platform allows organizations to scale up or down based on their needs. With conventional on-premise hosted solutions, it can be tough to align physical infrastructure quickly to varying demands. Migrating to the cloud helps deploy scalable IT infrastructure that adjusts itself as per the requirements, making sure that resources are always available when required.

4.      Improved Efficiency

Moving to the cloud also helps avoid spending money on infrastructure that ends up under-utilized. With early access to a wide range of data, businesses can gather valuable insights about the performance of their systems and plan their future strategy accordingly. Pharmaceutical companies, hospitals and doctors can focus on their core objective – giving the best possible treatment and service to patients – while the cloud service providers take care of their IT needs.

5.      More Reliability

Cloud-based software remains available 24/7 from anywhere to any authorized person with an internet connection. Apart from that, its distributed architecture makes it easier to recover from losses due to natural disasters.

Conclusion

The cloud's resiliency and high availability make it a cost-effective alternative to on-site hosted solutions. However, security has been a major barrier to cloud adoption in many verticals. It is especially critical in the healthcare industry, which is regulated by the HIPAA and HITECH Acts, and it plays a major role in such organizations' decisions to move their data into a public cloud app.

7 Tips to Save Costs in Azure Cloud

Cloud computing comes with myriad benefits through its various as-a-service models, and hence most businesses consider it wise to move their IT infrastructure to the cloud. However, many IT admins worry that hidden costs will raise their department's total cost of ownership.

We believe that it is more about estimating your requirements correctly and managing resources in the right way.

Microsoft Azure Pricing

Microsoft Azure allows you to quickly deploy infrastructures and services to meet all of your business needs. You can run Windows and Linux based applications in 22 Azure data center regions, delivered with enterprise grade SLAs. Azure services come with:

  • No upfront costs
  • No termination fees
  • Pay only for what you use
  • Per minute billing

You can calculate your expected monthly bill using the Pricing Calculator and track your actual account usage and bill at any time using the billing portal.

How to save cost on Azure Cloud?

  1. Azure allows you to set a monthly spending limit on your account. So, if you forget to turn off your VMs, your Azure account will get disabled before you run over your predefined monthly spending limit. You can also set email billing alerts if your spend goes above a preconfigured amount.
  2. It is not enough to shut down VMs from within the instance to avoid being billed, because Azure continues to reserve the compute resources for the VM, including a reserved public IP. Unless you need VMs to be up and running all the time, shut them down and deallocate them to save on cost. This can be done from the Azure Management portal or with Windows PowerShell (a minimal sketch follows this list).
  3. Delete unused VPN gateways and application gateways, as they are charged whether they run inside a virtual network or connect to other virtual networks in Azure. Your account is charged based on the time the gateway is provisioned and available.
  4. To avoid reserved IP address charges, keep at least one VM up and running with the reserved IP attached; the first 5 reserved public IPs in use are included. If you shut down all the VMs in a service, Microsoft is likely to reassign that IP to some other customer's cloud service, which can hamper your business.
  5. Minimize the number of compute hours by using auto scaling. Auto scaling can minimize cost by reducing the total compute hours, since the number of nodes on Azure scales up or down based on demand.
  6. When an end-user's PC makes a DNS query, recursive DNS servers run by enterprises and ISPs cache the DNS responses. These cached responses don't incur charges as they don't reach the Traffic Manager name servers. The caching duration is determined by the “TTL” parameter in the original DNS response. With a larger TTL value you can reduce DNS query charges, but it results in longer end-user failover times. On the other hand, a shorter TTL value reduces caching, resulting in more queries against the Traffic Manager name servers. Hence, configure the TTL in Traffic Manager based on your business needs.
  7. Blob storage offers a cost-effective solution for storing graphics data. Table and Queue storage of 2 GB costs $0.14/month, while block blob storage of the same size costs just $0.05/month.
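As referenced in tip 2, here is a minimal PowerShell sketch for deallocating VMs outside business hours. It is only an illustration: it assumes the AzureRM module of that era, and the resource group and VM names are placeholders.

# Minimal sketch (assumes the AzureRM PowerShell module; resource group and
# VM names below are placeholders). Stop-AzureRmVM with -Force stops and
# deallocates the VM, so its compute resources stop accruing charges.
Import-Module AzureRM
Login-AzureRmAccount                      # interactive sign-in

$resourceGroup = "my-resource-group"      # placeholder
$vmName        = "dev-test-vm"            # placeholder

# Deallocate a single VM
Stop-AzureRmVM -ResourceGroupName $resourceGroup -Name $vmName -Force

# Or deallocate every VM in the resource group, e.g. from a nightly scheduled job
Get-AzureRmVM -ResourceGroupName $resourceGroup |
    ForEach-Object { Stop-AzureRmVM -ResourceGroupName $resourceGroup -Name $_.Name -Force }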

SQL Database

A SQL Database of similar capacity will cost $4.98/month. Hence, use blob storage to store images, videos and text files instead of storing in SQL Database.


To reduce the cost and increase the performance, put the large items in the blob storage and store the blob record key in SQL database.
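As a rough illustration of that pattern, the sketch below uploads a large file to blob storage and keeps only a lightweight reference for the SQL row. It assumes the classic Azure.Storage PowerShell cmdlets; the storage account, key, container and file names are placeholders.

# Minimal sketch (Azure.Storage module; account, key, container and file
# names below are placeholders).
$ctx = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<storage-key>"

# Store the large item itself in blob storage...
Set-AzureStorageBlobContent -File "C:\media\scan-0042.png" -Container "images" -Blob "scan-0042.png" -Context $ctx

# ...and keep only the small blob record key in SQL Database, for example:
#   INSERT INTO Scans (RecordId, BlobKey) VALUES (42, 'images/scan-0042.png')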

The above tips will definitely help you cut costs on Azure and leverage the power of cloud computing to the fullest!

8K Miles Tweet Chat 3: IAG Issues and Solutions

8K Miles organised a Tweet chat on IAG issues and solutions on May 10th. If you have questions related to IAG for your organisation, or wish to understand IAG better, this blog is the right place. This post is a recap of what happened during the Tweet chat, compiling the questions asked and the answers given by the participants. The official Twitter handle of 8K Miles, @8KMiles, shared frequently asked questions on IAG issues and solutions, which were discussed and answered by the participants.

The ten questions (Q1 to Q10) and the participants' answers were shared as embedded tweet screenshots during the chat.

It was an informative chat on IAG issues and solutions. For more such tweet chats on cloud industry follow our Twitter handle @8KMiles.

7 Common AWS Cost Issues and How You Can Fix Them

Cloud solutions offer significant business benefits for startups as well as established enterprises. To help with the cloud setup, Amazon Web Services (AWS) delivers brilliant cloud infrastructure solutions with pay-per-use services and other computing resources that grow with the needs of a business. However, even with all these benefits, cost-related issues still exist. Even though the AWS model saves building and maintenance costs, there are cost management issues users encounter while using the cloud. So, keeping cost management in mind, here are 7 common AWS cost issues and how you can fix them.

Resource Purchase

Remember to check your resource utilization before purchasing. Reserved, on-demand and spot instances should be purchased appropriately depending on usage and risk: spot instances carry termination risk, and reserved instances can sit unused due to improper mapping. For long-term usage, use reserved EC2 instances.

Instance Size

Remember to analyze your needs and choose the appropriate size rather than sizing for the highest demand. Sizes vary (large, medium, small), so do not simply accept the defaults. You can use auto scaling services to manage high load for certain periods of time. Also, save cache or non-critical data from the application into non-persistent storage instead of increasing the size of the Elastic Block Store (EBS) volume.

EC2 Utilization

Elastic Compute Cloud (EC2) instances are charged for their usage time even if an instance is using less than its provisioned capacity or sitting idle. Identify idle and underutilized instances by analyzing CPU utilization and network activity; if these metrics stay low, the EC2 instance should be flagged. You can then contact the instance owner and verify whether the instance is needed at all, or whether it is the correct size. Shut down instances when they are not needed; this helps reduce cost. Also find ways to reuse unused reserved instances.

Using ElastiCache to store cache information from the application and database reduces instance CPU utilization and bandwidth; this minimizes bandwidth usage and thus reduces cost. You can also run the ECS service on underutilized EC2 instances to increase the workload and efficiency of those instances, instead of launching new resources.
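Below is a minimal sketch of how such a utilization check could look with the AWS Tools for PowerShell. It assumes the module is installed and credentials are configured; cmdlet and parameter names reflect a recent release of the module, and the 5% threshold is just an example.

# Minimal sketch: flag running instances whose two-week average CPU is under 5%.
# Assumes AWS Tools for PowerShell with credentials already configured.
Import-Module AWSPowerShell
Set-DefaultAWSRegion -Region us-east-1

$end   = Get-Date
$start = $end.AddDays(-14)

$instances = (Get-EC2Instance).Instances | Where-Object { $_.State.Name -eq "running" }
foreach ($instance in $instances) {
    $dimension = New-Object Amazon.CloudWatch.Model.Dimension
    $dimension.Name  = "InstanceId"
    $dimension.Value = $instance.InstanceId

    $stats = Get-CWMetricStatistic -Namespace "AWS/EC2" -MetricName "CPUUtilization" `
        -Dimension $dimension -StartTime $start -EndTime $end -Period 3600 -Statistic "Average"

    $avgCpu = ($stats.Datapoints | Measure-Object -Property Average -Average).Average
    if ($avgCpu -lt 5) {
        Write-Output ("Flag for review: {0} averaged {1:N1}% CPU" -f $instance.InstanceId, $avgCpu)
    }
}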

S3 Lifecycle

Keep an eye on your object storage and regularly track what you are storing, where it is stored and how you are storing it. By using Simple Storage Service (S3) lifecycle rules you can control and reduce storage costs: expiring objects, or transitioning their storage class to RRS or Glacier, reduces your S3 and overall storage costs. Data that is no longer needed, or that does not need to be highly available, can be deleted or moved to Glacier storage using an S3 lifecycle policy.

Use Glacier to archive data for longer periods and plan the data retrieval process from Glacier carefully; do not retrieve data frequently.

Data Transfer Charges

It is important to constantly track the data transfer charges as they could cause unnecessary expenses. Maintaining a precise resource inventory on ‘what data is transferred’ and ‘where’ (i.e. to which region) would prevent money wastage on data transfer.

AWS Support Services

For many users, the EC2 hourly charges are greater than the pay-as-you-go usage charges. Using AWS services such as ELB on a pay-as-you-go basis can therefore help reduce cost. Analyze your costs and check whether these services are effective for your usage.

Remove Resources

Detach Elastic IPs from instances that are in a stopped state and release any other unattached IPs. Also, delete older and unwanted AMIs and snapshots, including snapshots of deleted AMIs. These resources should be tracked regularly so they don't get lost among the many other resources; individually these items cost little, but together they create a large expense. In an AWS environment all resources are billed even if they are inactive, so it is important to turn off the unused ones. Take a snapshot of an unused RDS instance (if needed) and then terminate it, and keep track of and remove all unwanted RDS manual snapshots and other unused resources.
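As one example of this kind of housekeeping, the sketch below lists Elastic IPs that are not associated with any instance and releases them. It assumes the AWS Tools for PowerShell with credentials and region already configured, and you would normally review the list before releasing anything in production.

# Minimal sketch: find and release Elastic IPs that are not attached to anything.
# Assumes AWS Tools for PowerShell; VPC-scoped addresses are released by AllocationId.
Import-Module AWSPowerShell
Set-DefaultAWSRegion -Region us-east-1

$unattached = Get-EC2Address | Where-Object { -not $_.AssociationId }

foreach ($address in $unattached) {
    Write-Output ("Releasing unattached Elastic IP {0}" -f $address.PublicIp)
    Remove-EC2Address -AllocationId $address.AllocationId -Force
}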

Though AWS is a dynamic and effective cloud service, it is important to regularly check your usage, manually or via automated reports, to avoid mistakes. These 7 cost issues can be avoided by regularly monitoring your AWS services, which will go a long way toward reducing cost.

Cloud Boundaries Redefined in AWS Chennai Meetup on 30th April @8KMiles

“You don’t have to say everything to be a light. Sometimes a fire built on a hill will bring interested people to your campfire.” ― Shannon L. Alder

This is one of the days where the above quote is proven to be right. As a market leader in delivering quality Cloud solutions, 8K Miles has this habit of stretching every new service offered by different cloud service providers to explore and solve the contemporary business problems. In yet another effort in that direction, we had a bunch of technical evangelists and architects gathering at 8K Miles today for the #AWSChennaiMeetup event, to discuss two broad areas on AWS architecture designs.

1) The Pros and Cons of Architecting Microservices on AWS

2) Cloud Boundaries redefined: Running ~600 million jobs every month on AWS


Session 1: Pros and Cons of Architecting Microservices on AWS

This topic was presented by Sudhir Jonathan from Real Image. Sudhir works as a consultant to Real Image, on the teams that build Moviebuff.com and Justickets.in. His history includes ThoughtWorks, his own startup and a few personal projects. He is an avid coder whose specialities include Ruby on Rails, Go, React, AWS and Heroku, among a few others.


His valuable knowledge sharing session started with the Pros and Cons of Architecting Microservices on AWS, also covering automated deployment, inter process communication using SQS, ECS, cost reductions using spot instances, ELB and Autoscaling groups.

Session 2: Cloud Boundaries redefined: Running ~600 million jobs every month on AWS

In the world of cloud “Speed is Everything”.  To identify various security, compliance, risk and vulnerability drifts instantly on our customer environment, 8K Miles  cloud operations team runs ~600 million jobs every month.  Mohan and Saravanan – the technical architects of 8K Miles shared their experience in running distributed and fault tolerant scheduler stack and how it has evolved.


During the event we also organized a simple tweet quiz in our handle @8KMiles for all the participants. Dwarak discussed each question in detail with all the participants.


For more detailed updates on this event, please check the hashtag #AWSChennaiMeetup and our handle @8KMiles on Twitter.
**The Chennai Amazon Web Services Meetup is organized by AWS technology evangelists from Chennai for AWS cloud enthusiasts. The goal is to conduct meetups often and to share and learn the latest technology implementations on AWS: the challenges, the learnings, the limitations and so on.

8K Miles is a leading Silicon Valley based Cloud Services firm, specializing in high-performance Cloud computing, Analytics, and Identity Management solutions and is emerging as one of the top solution providers for the IT and ITIS requirement on Cloud for the Pharma, Health Care and allied Life Sciences domains.

 

 

Demand for Cloud EHR is Increasing Rapidly

Considering the changing landscape of requirements in healthcare data management, the new cloud-based Electronic Health Record (EHR) has seen rapid growth in demand for various reasons. When Epic Systems announced it would acquire a Mayo Clinic data center for $46 million, it reinforced the belief that demand for cloud EHR is increasing steadily in the healthcare domain.

In 2016, the demand for cloud-based technology solutions that assist medical practitioners to deliver better care while reducing administrative burdens is expected to gain momentum.

What is cloud-based EHR?

EHR is a collection of electronic health data of individual patients or populations. It includes medical history, demographics, genetic history, medication and allergies, laboratory test results, age/weight/blood group, x-ray images, vital signs, etc. in digital format and is capable of being shared across different stakeholders.

Cloud-based EHR allows software and clinical data to be stored, shared and updated in the cloud, whereas traditional EHR systems usually make information available only to users in the same physical location as the software and servers.

Putting it in simpler words, cloud EHR allows accessing and working with data hosted at a shared online location rather than on a personal disk drive or local server. All software and information is stored exclusively on an online network (also known as “in the cloud”), and any authorized user with an internet connection can access it.

Why has demand for cloud EHR increased?

Given the existing demand for cloud EHR solutions, the EHR market is expected to reach about $30 billion by 2020 and to keep growing beyond that.

Source: Grand View Research

This demand is primarily driven by increased need for anytime-anywhere accessible software solutions that reduce errors and increase ease of use.


Legacy on-premise solutions are unable to meet the changing requirements of today’s healthcare sector. They are built on outdated client-server systems that are costly, inflexible and can’t meet the need to analyze data on real-time basis. Such issues pose a significant challenge to healthcare providers who work with complex and disconnected datasets.

As compared to traditional on-site hosted solutions, cloud computing offers benefits such as:

  • Cost Reduction

Cloud-based software requires fewer development and testing resources, which implies a lower cost for support and maintenance of applications.

  • Improved Efficiency

Cloud solutions can automate many business processes, such as system upgrades. Being able to see the bigger picture in real time allows you to focus on your core strengths.

  • Accessibility

Users can access applications from anywhere and on any device, breaking down barriers of geography and improving the speed with which decisions can be taken.

  • Flexibility

Cloud-based network can easily scale, accommodate and respond to a rapid increase in the number of users and spikes in demand.

  • Reliability

Cloud computing allows applications to run independent of hardware through a virtual environment running out of secure data centers.

Today's technological capabilities have made it possible to make health records more attractive to end users. Cloud-based EHR solutions with visually appealing interfaces and innovative methods of interpreting, analyzing and presenting health records have been successful in improving the doctor-patient relationship.

Related post from 8KMiles

Top Health IT Issues You Should Be Aware Of

How Cloud Computing Can Address Healthcare Industry Challenges

How pharmaceuticals are securely embracing the cloud

5 Reasons Why Pharmaceutical Company Needs to Migrate to the Cloud

6 Reasons why Genomics should move to Cloud


In the exciting, dynamic world of Genomics**, one witnesses path-breaking discoveries being made every day. On a mission to empower the Pharmaceutical and Health Care industries with a deeper understanding of the genome*, gene activities and likelihoods of mutation, research activities in Genomics generate massive amounts of very important, significant data.

Research in Genomics churns out solutions: a vast amount of useful information with which the identification, treatment and prevention of numerous diseases and disorders could be realized with improved efficiency. Now, think about advanced gene therapy and molecular medicine!

This enormous range of data and information needs a system that is not just capable of handling the colossal data load but can also preserve it with high security and managed accessibility options.

  1. Large-scale genome sequencing, comparative genomics and pharmacogenomics require storage and processing of enormous volumes of data to derive valuable insights that facilitate gene mapping, diagnosis and drug discovery.
  2. The exhaustive genome database on a perpetual expansion mode simply exceeds the capacity of existing on-premise data storage facilities.
  3. In addition, the research-intensive domain requires managed services for user governance, access management and data encryption, which require synchronized efficiencies and compatibility with multiple systems that comply with international best practices and standard protocols.
  4. Cloud Architecture, empowered by scalability and elastic efficiencies, provides virtual storage spaces for the expansive genome database, with assisted migration, accessibility and security implementation in place.
  5. Large scale data-processing, storage, simulation and computation could be achieved on virtual laboratories on the cloud.
  6. Last but not least, cloud solutions for Genomics can be configured to fit specific research and standardized protocol requirements, rendering huge advantages in terms of flexibility, compliance with protocols and regulatory standards, cost savings and time efficiencies.

The leading Silicon Valley based Cloud Services firm, 8K Miles, specializes in high-performance Cloud computing solutions to Bioinformatics, Proteomics, Medical informatics and Clinical Trials for CROs, emerging as one of the top solution providers for the IT and ITIS requirement on Cloud for the Pharma, Health Care and allied Life Sciences domains.

*A Genome is the collection of the entire set of genes present in an organism.
**Genomics is the branch of science that deals with the structure, function, evolution and mapping of genomes.


Enhancing patient care with well-defined Identity Access Governance services


Richard Branson had spoken well in his recent tweet –
“…the only mission worth pursuing in business is to make people’s lives better.”

More so when it comes to Health Care IT. There prevails a strong moral responsibility in providing and protecting health care data, particularly of those enclosed in Electronic Health/Medical Records (EHRs). There are multiple opinions on the grant of primary ownership and access to these records, as the data is highly personal and hence sensitive and confidential.

Interoperability in Health Care IT is the ability of different IT systems and Apps to communicate, exchange and use data. This comes in as a boon for those who have to keep up with a change of residence/doctors/hospitals/health care providers, for there are high chances that no two places operate with the same IT infra system. With this, comes along the need for the above identities/personalities to interact in a secure manner, which could be monitored/managed in an efficient way. This is achieved by a comprehensive set of Identity & Access Governance (IAG) services.

Ideally, IAG should be designed in such a way that it effectively answers the following five questions:

  1. Is your system built on an anti-hackable environment?
  2. Do you protect patients’ records? If yes, how?
  3. Who are the intended users of an EHR? Is the User given permission to access and own the resource? If yes, how? If no, why?
  4. Do you restrict users from accessing a particular portal, for security reasons, on justified grounds?
  5. How would you monitor employees within a health care facility, to check if s/he still has access to resources tied to her/his past role in the organization?

The HIPAA Security Rule requires that a user or entity accessing protected health information (PHI) be authenticated before such access is granted. IAG services, therefore, should implement security measures sufficient to reduce risks and vulnerabilities to a reasonable and appropriate level.

The leading Silicon Valley based Cloud Services firm, 8K Miles, effectively addresses this golden Security Rule via elaborate risk analysis and assessment, helping healthcare service providers implement reliable, real-time IAG services and solutions, be it in the cloud or in on-premise data centres. It has thus emerged as one of the most trusted solution providers for the IT and ITIS requirements of the Health Care, Pharma and allied Life Sciences domains.


Summary of Chennai Azure Global Bootcamp 2016


The Chennai Global Azure Bootcamp 2016, held on April 16th, 2016, went really well, with lots of technical sessions and hands-on labs. We received more than 1000 registrations from people with a variety of backgrounds: some were practicing professionals, a few came from an IT Pro background, and there were lots of students aspiring to become cloud professionals soon.
The event started at 10.00 am and the keynote was delivered by Balaji Uppili, Chief Delivery Officer, GAVS. He gave a lightning talk on the current cloud landscape and how Azure is leading the game, and also touched a little upon how developers must equip themselves to stay relevant in an ever-changing IT landscape. Soon after the keynote, presenters started offering sessions in 2 tracks: 1. Developer 2. IT Pro

We had 10 fantastic speakers from Chennai and Bangalore who delivered sessions on various topics including Azure App Service, open source and big data. I delivered a session on Azure Data Lake Analytics and Data Lake Store, services which are currently in preview; the attendees were able to recognize the value of these services and how they can help developers leverage big data analytics and exploit big data.

Event Highlights

 

  • Total Registrations for the event : 1000+
  • Attendees joined the session : 450
  • % of Dev/IT Pro : 75%
  • % of Student partners : 25%
  • Total no of Technical Tracks : 10
  • Hands on Lab conducted : 1

Tracks

 

9:00 – 9:30: Reception and registration
9:30 – 10:00: Chennai Global Azure Bootcamp keynote by Balaji

Dev Track | IT Pro Track
10:15 – 11:00: Building Mobile Apps with Visual Studio and Azure Services | Power of Open Source on Azure (Ruby on Rails)
11:00 – 11:15: Café
11:30 – 12:15: Deep into Azure Machine Learning & Predictive Analytics | Running Linux workloads on Azure
12:30 – 13:15: DevOps in Azure | Kick off your project for success!
13:15 – 14:00: Lunch
14:00 – 14:45: Deep dive into Azure SQL | Introduction to Data Lake Analytics & Store
15:00 – 15:45: IoT on Azure (Part 1) | Azure AD with Office 365
15:45 – 16:00: Café
16:15 – 17:00: IoT on Azure (Part 2) | Azure Hands-on Labs
17:00 – 17:30: Azure Global Bootcamp 2016 closing

 

Local Sponsors

GAVS Technologies and 8KMiles Software Services were the key local sponsors who helped us execute such a large event in Chennai. In fact, this is one of the largest community-driven Azure events conducted in the city in the recent past. I'm very thankful to all the sponsors for helping us execute the event.
Azure Bootcamp Sponsors

Conclusion

Overall, the event went on really well, and a lot of great content was delivered by our awesome experts and MVP speakers. Thanks to all the presenters as well as all attendees! Without you, there wouldn’t have been an event. Also, special thanks goes to the Global Azure Bootcamp team for organizing the global event and getting together the prizes from all the global sponsors.

I had a great time presenting and helping people out with the hands-on labs, launching Windows and Linux VMs towards the end of the day. It was a great learning and fun experience. I'm planning to help coordinate the Chennai Global Azure Bootcamp event next year as well, God willing.

Until next year, adios amigos!

P.S. Please feel free to contact me with any question about Azure or general feedback on the event. You can either submit them to me in the comments on this post, via Twitter @ilyas_tweets or drop an email to me @ ilyas.f@8kmiles.com

 

 


5 Considerations you need to know before investing in Big Data Analytics

A vast number of companies from different industrial backgrounds collaborate with data analytics companies to increase operational competence and make better business decisions. When big data is handled properly, it can lead to immense change in a business. Though data analysis is a powerful tool, most companies are not ready to use data analysis software as a practical resource. Purchasing and deploying data analytics isn't as simple as buying software. There are many things that must be considered before a company invests in analytics software.

You should know exactly where your company stands in terms of its analysis systems and consider the following things before investing in big data analytics.

What do you want to find out from your data?

You should know what you will be using your analytics software for before investing in it. If you don't know what business problem you need to solve, then collecting data and setting up an analysis system isn't productive. So check for areas of your company where the current process is not effective, and identify the questions you need answered prior to investing in a solution, so you can choose an appropriate analytics partner.

Do you have enough data to work with?

You should have significant and reliable data to perform data analytics. Therefore, you need to see whether your company has a sufficient amount of data or workable information to perform analysis. You should also determine whether the company can afford to collect such information and has the ability to do so. This process can become expensive considering the labor cost, the hours spent categorizing the information and data storage, so it is also necessary to consider data aggregation and storage costs before moving forward.

Do you have the capital to invest for analytics software?

The price range for analytics software varies depending on a company's needs. A few software vendors offer data warehousing, which can be ideal for companies that require data storage as well as analytics and have a large budget. Other vendors offer visualization systems, in both SaaS and on-premise form. As visualization comes in varied price ranges, your company should be able to find a solution that fits your budget.

Besides the software cost, you should estimate the cost of effort and services, which can be around five times the software price. The investment can change depending on the size and depth of the project, but it is necessary to completely understand the costs involved in data analytics before investing.

Do you have the resources to work with your data?

Many analytics systems are automated, but you still need user interaction and management. It is necessary to have a data engineer for constant data ingestion, organization and provisioning of data marts for the data analysts and data scientists, who in turn will continue to work on new insights and predictions by updating the data processing rules and algorithms/models as business needs change. Having a designated owner for analytical decisions will also avoid confusion, and that specific person should be able to allot time and materials for scrutinizing the data and producing reports.

Are you prepared to take action?

At the final stage you will have collected data, identified the problem, invested in the software and performed the analysis; but to make it all worthwhile you have to be ready to act immediately and efficiently. With the newly discovered insights, you have the information required to change your organization's practices. Executing a new project can be expensive, though, so it's essential to be ready with the resources necessary for implementing the change.

Data analytics can be a powerful tool to improve a company’s efficiency. So remember to consider these five factors before investing in big data analytics.

Powershell: Automating AWS Security Groups

To provision and manage EC2 instances in the AWS cloud in a way that complies with industry standards and regulations, the individuals administering them should understand the security mechanisms within the AWS framework, both those that are automatic and those that require configuration. Let's take a look at Security Groups, which fall under the latter category.

As there is no one-size-fits-all Security Group that can be plugged in to satisfy every need, we should always be open to modifying them. Automating this via PowerShell provides predictable, consistent results.

What Is a Security Group?

Every VM created through the AWS Management Console (or via scripts) can be associated with one or more Security Groups (up to five per network interface in a VPC). By default, all inbound traffic to the instance is blocked. We should automate the infrastructure to open only the ports the customer actually needs, which means adding ingress/egress rules to each Security Group as per the customer's requirements. For more details, have a look at the AWS Security Group documentation.

It is important to allow traffic only from valid source IP addresses; this substantially reduces the attack surface, whereas using 0.0.0.0/0 as the IP range leaves the infrastructure vulnerable to sniffing or tampering. Traffic between VMs should always be governed by Security Groups; we can achieve this by specifying the initiator's Security Group ID as the source.


Automation Script


I have kept this as a single block; if you wish, you can turn it into a function. A few things worth considering:

  • Execution of this script will only succeed given a working pair of Access Key and Secret Key
  • The script uses filtering: the end user provides a name pattern, and the Security Group is selected based on that pattern
  • To carry out the operation you have to provide certain parameters, i.e. IpProtocol, FromPort, ToPort and Source
  • The Source parameter can be interpreted in two ways: you can either provide IpRanges in CIDR block format or choose another Security Group as the source in the form of a UserIdGroupPair

<#
.SYNOPSIS
Simple script to safely assign/revoke ingress rules on a VPC Security Group.

.DESCRIPTION
The script first checks which rules have been specified for update; rules that are already assigned are left unharmed.
If the assignment succeeds, it can be verified in the AWS console.

NOTE: The script must be updated to include a proper group-name pattern and security credentials.
#>

# Update the following lines, as needed:

Param(
    [string]$AccessKeyID = "**********",
    [string]$SecretAccessKeyID = "********",
    [string]$Region = "us-east-1",
    [string]$GrpNamePattern = "*vpc-sg-pup_winC*",
    [string]$GroupId = "sg-xxxxxxxx",
    [string]$CidrIp = "0.0.0.0/0",
    [switch]$SetAws = $true,
    [switch]$Revoke,
    [switch]$Rdp = $true,
    [switch]$MsSql = $true
)

# Collect the inputs into a single object for readability
$InfoObject = New-Object PSObject -Property @{
    AccessKey      = $AccessKeyID
    SecretKey      = $SecretAccessKeyID
    Region         = $Region
    GrpNamePattern = $GrpNamePattern
    GroupId        = $GroupId
    CidrIp         = $CidrIp
}

if ($SetAws)
{
    Set-AWSCredentials -AccessKey $InfoObject.AccessKey -SecretKey $InfoObject.SecretKey
    Set-DefaultAWSRegion -Region $InfoObject.Region
}

# Source security group for the RDP rule: traffic is allowed from members of this group
$PublicGroup = New-Object Amazon.EC2.Model.UserIdGroupPair
$PublicGroup.GroupId = $InfoObject.GroupId

# Locate the target security group by name pattern (the pattern should match a single group)
$filter_platform = New-Object Amazon.EC2.Model.Filter -Property @{Name = "group-name"; Values = $InfoObject.GrpNamePattern}
$SG_Details = Get-EC2SecurityGroup -Filter $filter_platform | Select-Object GroupId, GroupName

# RDP (3389) sourced from the security group; MS SQL (1433) sourced from the CIDR range
$rdpPermission = New-Object Amazon.EC2.Model.IpPermission -Property @{IpProtocol = "tcp"; FromPort = 3389; ToPort = 3389; UserIdGroupPair = $PublicGroup}
$mssqlPermission = New-Object Amazon.EC2.Model.IpPermission -Property @{IpProtocol = "tcp"; FromPort = 1433; ToPort = 1433; IpRanges = $InfoObject.CidrIp}

$permissionSet = New-Object System.Collections.ArrayList
if ($Rdp)   { [void]$permissionSet.Add($rdpPermission) }
if ($MsSql) { [void]$permissionSet.Add($mssqlPermission) }

if ($permissionSet.Count -gt 0)
{
    try {
        if (!$Revoke) {
            "Granting to $($SG_Details.GroupName)"
            Grant-EC2SecurityGroupIngress -GroupId $SG_Details.GroupId -IpPermissions $permissionSet
        }
        else {
            "Revoking from $($SG_Details.GroupName)"
            Revoke-EC2SecurityGroupIngress -GroupId $SG_Details.GroupId -IpPermissions $permissionSet
        }
    }
    catch {
        if ($Revoke) {
            Write-Warning "Could not revoke permission from $($SG_Details.GroupName)"
        }
        else {
            Write-Warning "Could not grant permission to $($SG_Details.GroupName)"
        }
    }
}

 

 

What we are looking at here is the ability to automate the creation and update of Security Group rules. Use this script when you find yourself changing Security Groups frequently.
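As a quick illustration, here is one way the script might be invoked once saved. This is a hypothetical usage sketch: the file name Set-SgIngress.ps1, the credential placeholders and the CIDR value are examples and not from the original post, and it assumes the AWS Tools for PowerShell module is installed.

# Grant RDP (sourced from the referenced security group) and MS SQL (sourced from the CIDR range)
# to the security group whose name matches the pattern:
.\Set-SgIngress.ps1 -AccessKeyID "**********" -SecretAccessKeyID "********" `
    -Region "us-east-1" -GrpNamePattern "*vpc-sg-pup_winC*" `
    -GroupId "sg-xxxxxxxx" -CidrIp "10.0.0.0/16"

# Revoke the same rules later by adding the -Revoke switch:
.\Set-SgIngress.ps1 -AccessKeyID "**********" -SecretAccessKeyID "********" -Revoke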


P.S. This script has been written with VPC in mind; differences in parameter usage between VPC and EC2-Classic security groups should be taken care of.

 

Credits – Utkarsh Pandey

  • April 19, 2016
  • blog

DevOps with Windows – Chocolatey

Conceptually, package management is a well-understood space for anyone with even the slightest understanding of how *nix environments are managed, but on Windows it was uncharted territory until recently. This piece of the stack was ironically missing for so long that once you get hands-on with it, you will wonder how on earth you lived without it. NuGet and Chocolatey are the two buzzwords making the most noise, and they are seen as the future of Windows server management.

What Is Chocolatey?

Chocolatey builds on top of the NuGet packaging format to provide package management for Microsoft Windows applications; it is a kind of yum or apt-get, but for Windows. It is CLI-based and can be used to decentralize packaging. It has a central repository located at http://chocolatey.org/.

If you have ever used the Windows built-in provider, you are probably aware of its issues: it doesn't really do versioning and is a poor fit for upgrades. For any organization looking for a long-term way to ensure that the latest versions are always installed, the built-in package provider may not be the recommended option. Chocolatey takes care of all this with very little effort. In contrast to the default provider, which has no dependencies, Chocolatey requires the machine to have PowerShell 2.0 and .NET Framework 4.0 installed. Installing a package from Chocolatey is a single command line that reaches out to the internet and pulls it down. Packages are versionable and upgradable; you can specify a version of a package and that is exactly what gets installed.

The recommended way to install Chocolatey is by executing a PowerShell script, as shown below.
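For a quick, concrete illustration, the install one-liner (the same one used in the CloudFormation sample later in this post) and a typical package installation from an elevated PowerShell prompt look roughly like this. The commands reflect 2016-era Chocolatey usage, and the package name and version are only examples:

# Install Chocolatey itself:
Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# Install a package; -y answers confirmation prompts automatically:
choco install firefox -y

# Or pin a specific version, which is what makes installs repeatable (version is illustrative):
choco install firefox --version 45.0.1 -y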

Chocolatey With AWS

AWS offers Windows instances in both of its compute offerings: under IaaS you can launch a Windows instance on EC2, whereas with PaaS you can get one via Elastic Beanstalk.

Using AWS CloudFormation:

Using 'cfn-init', AWS CloudFormation supports downloading files and executing commands on a Windows EC2 instance. Bootstrapping a Windows instance with CloudFormation is a lot simpler than the alternatives, and we can leverage it to install Chocolatey while launching the server from a template. To do this we execute PowerShell.exe and pass it the install command. One thing to take care of: the Chocolatey installer and the packages it installs may modify the machine's PATH environment variable. This adds complexity, since subsequent commands are executed in the same session, which does not have the updated PATH. To overcome this, we use a command file that sets the session's PATH to the machine's PATH before executing our command. We will create a command file 'ewmp.cmd' to execute a command with the machine's PATH, and then proceed with Chocolatey and any other installation. The sample below installs Chocolatey and then installs Firefox with Chocolatey as the provider.

"AWS::CloudFormation::Init": {
  "config": {
    "files": {
      "c:/tools/ewmp.cmd": {
        "content": "@ECHO OFF\nFOR /F \"tokens=3,*\" %%a IN ('REG QUERY \"HKLM\\System\\CurrentControlSet\\Control\\Session Manager\\Environment\" /v PATH') DO PATH %%a%%b\n%*"
      }
    },
    "commands": {
      "1-install-chocolatey": {
        "command": "powershell -NoProfile -ExecutionPolicy unrestricted -Command \"Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))\""
      },
      "2-install-firefox": {
        "command": "c:\\tools\\ewmp choco install firefox"
      }
    }
  }
}

 

Using AWS Elastic Beanstalk:

AWS Elastic Beanstalk supports the downloading of files and execution of commands on instance creation using container customization. We can leverage this feature to install Chocolatey.

The installation above can be translated into AWS Elastic Beanstalk config files to enable the use of Chocolatey in Elastic Beanstalk. The difference with Elastic Beanstalk is that we create YAML .config files inside the .ebextensions folder of our source bundle.

files:
  c:/tools/ewmp.cmd:
    content: |
      @ECHO OFF
      FOR /F "tokens=3,*" %%a IN ('REG QUERY "HKLM\System\CurrentControlSet\Control\Session Manager\Environment" /v PATH') DO PATH %%a%%b
      %*
commands:
  1-install-chocolatey:
    command: powershell -NoProfile -ExecutionPolicy unrestricted -Command "Invoke-Expression ((New-Object Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
  2-install-firefox:
    command: c:\tools\ewmp choco install firefox

 

The above works the same way as the CloudFormation sample: it creates the command file 'ewmp.cmd' to execute commands with the machine's PATH before installing Chocolatey and Firefox.
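To sanity-check the bootstrap you could connect to the instance (for example over RDP) and query Chocolatey directly. The commands below are a small sketch using 2016-era Chocolatey flags, not steps from the original post:

# Confirm Chocolatey itself is installed and on the PATH:
choco --version

# List locally installed packages; firefox should appear in the output:
choco list --local-only

# Later upgrades are a single command:
choco upgrade firefox -y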

 

P.S. Chocolatey works best as a package provider for Puppet on Windows; Puppet offers great support for installing and running Chocolatey packages on Windows.

 

 

Credits – Utkarsh Pandey

 

  • April 19, 2016
  • blog

Top Health IT Issues You Should Be Aware Of

Information Technology (IT) plays a major role in refining healthcare facilities, improving patient care and organizing vast quantities of health-related data. Over several years, healthcare across the country has seen remarkable growth with the help of IT, and both the public and private healthcare sectors are using IT to meet new requirements and standards. Though IT plays an important role in improving patient care, increasing efficiency and reducing cost, there are certain health IT issues that you should be aware of and address:

Database

New databases and related tools are needed to manage the huge amount of data and improve patient care. Relational databases (such as those behind electronic health records (EHR)) organize data into tables and rows, forcing information into predefined groups; they work well for easily structured information but struggle with unstructured data (such as free-text records, clinical notes, etc.). Non-relational databases, by contrast, make it easy to analyze different forms of data while avoiding a rigid structure, helping organizations manage and make proper use of vast amounts of healthcare data.

Mobile Healthcare and Data Security

With changing financial incentives and the growth of mobile healthcare technology, patient care is shifting toward the consumer, and mobility makes it easy to provide care anywhere, anytime. To reduce the money spent on health plans, additional tools are being aligned with wellness and disease management programs. However, cyber security is the biggest threat: a data breach can cause huge financial loss, so it is necessary to take action to prevent breaches.

With the increasing mobility of healthcare, it is a must to introduce a mobile/BYOD policy that helps prevent data breaches and privacy intrusions.

Health Information Exchange (HIE)

HIE enables the sharing of healthcare data between healthcare organizations. Concerns related to healthcare policy and standards should be analyzed before implementing such exchanges, as sensitive data is at risk.

Wireless Network and Telemedicine

Wireless networking is mandatory for healthcare employees to deliver medical services. Migrating legacy health IT services to wireless access can be expensive and challenging due to structural limitations. Wireless issues also continue to be an obstacle to telemedicine adoption, and varying state policies on telemedicine use and reimbursement continue to restrict this emerging practice.

Data analysis

Data analysis plays a major role in assisting with treating and preventing illness and in providing quality care to people. Implementing a data analysis system that offers secure data storage and easy access can be an expensive and demanding task.

Cloud System

Cloud systems raise many questions with respect to data ownership, security and encryption. To address these issues, some providers are experimenting with cloud-based EHR systems while others build their own private clouds.

The need for health IT is increasing every day. Though health IT has become a major phenomenon, we should remember that challenges will continue to arise as it progresses, so stay aware, keep yourself updated on the top health IT issues and tackle them.

Related post from 8KMiles

How Cloud Computing Can Address Healthcare Industry Challenges

How pharmaceuticals are securely embracing the cloud

5 Reasons Why Pharmaceutical Company Needs to Migrate to the Cloud

8K Miles Tweet Chat 2: Azure

If you missed our latest Twitter chat on Azure or wish to go through the chat once again, this is the right place! Here's a recap of the 12th April tweet chat, compiling all the questions asked and the answers given by the participants. The official tweet chat handle of 8K Miles, @8KMilesChat, shared frequently asked questions (FAQs) related to Azure, and here's how they were answered.

[Screenshots of the tweet chat questions and answers (Q1–Q10) appeared here.]

We received clear answers to every question asked, and it was an informative chat on Azure. For more such tweet chats on the cloud industry, follow our Twitter handle @8KMiles.

The active participants during the tweet chat were cloud experts Utkarsh Pandey and Harish CP. Here’s a small brief on their expertise:

Utkarsh Pandey

Utkarsh is an AWS and Azure certified Solutions Architect who, in his current role, is responsible for cloud development services.

HarishCP

HarishCP is a Cloud Engineer who works in the Cloud Infrastructure team, helping customers with infrastructure management and migration.


  • April 13, 2016
  • blog

Puppet – An Introduction


The most common issue while building and maintaining large infrastructure has always been wasted time: the amount of redundant work performed by each member of the team is significant. The idea of automatically configuring and deploying infrastructure evolved out of a wider need to address this problem.

Puppet and Chef are two of the many configuration management packages available. They offer a framework for describing your application/server configuration in a text-based format: instead of manually installing IIS on each of your web servers, you write a configuration file which says "all web servers must have IIS installed".

What Is Puppet?

Puppet is Ruby-based configuration management software that can run in either client-server or stand-alone mode. It can be used to manage configuration on UNIX (including OS X), Linux, and Microsoft Windows platforms. Unlike other provisioning tools that build your hosts and then leave them on their own, Puppet is designed to interact with your hosts in a continuous fashion.

You define a "desired state" for every node (agent) on the Puppet master. If an agent node doesn't match its desired state, then in Puppet terms "drift" has occurred. The actual decision about how a machine is supposed to look is made by the master, whereas each agent only provides data about itself and is then responsible for applying those decisions. By default each agent contacts the master every 30 minutes, which can be customized. The entire process can be summed up with the workflow below; a small hands-on sketch of triggering a run follows the list.

[Diagram: Puppet master–agent data flow]

  1. Each node sends its current information (current state) to the master in the form of facts.
  2. The Puppet master uses these facts to compile a catalog describing the desired state of that agent, and sends it back to the agent.
  3. The agent enforces the configuration as specified in the catalog, and sends a report back to the master indicating success or failure.
  4. The Puppet master generates a detailed report which can be fed to any third-party tool for monitoring.
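As a small hands-on sketch of this loop (standard Puppet agent usage, not commands from the original post), you can trigger a run immediately instead of waiting for the 30-minute interval, and adjust that interval through the agent's runinterval setting:

# Trigger a single agent run right away and print the outcome to the console
# (works the same on Windows, from an elevated PowerShell prompt):
puppet agent --test

# The default 30-minute check-in is governed by the runinterval setting (in seconds)
# in the [agent] section of puppet.conf, e.g. to check in every 10 minutes:
#   [agent]
#   runinterval = 600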

Credits – Utkarsh Pandey

  • April 13, 2016
  • blog

Meet 8K Miles Cloud Experts at Bio-IT World Conference & Expo ‘16

The annual Bio-IT World Conference & Expo is around the corner! Cambridge Healthtech's 2016 Bio-IT World Conference and Expo is happening at the Seaport World Trade Centre, Boston, MA, and 8K Miles will be attending and presenting at the event. The three-day meet, from April 5th to 7th, includes 13 parallel conference tracks and 16 pre-conference workshops.

 The Bio-IT World Conference & Expo continues to be a vibrant event every year that unites 3,000+ life sciences, pharmaceutical, clinical, healthcare, and IT professionals from more than 30 countries. At the conference look forward to compelling talks, including best practice case studies and joint partner presentations, which will feature over 260 of your fellow industry and academic colleagues discussing themes of big data, smart data, cloud computing, trends in IT infrastructure, genomics technologies, high-performance computing, data analytics, open source and precision medicine, from the research realm to the clinical arena.

When it comes to the Cloud, Healthcare, Pharmaceutical and Life Sciences organizations have special needs. 8K Miles makes it stress-free for your organization to embrace the Cloud and reap all the benefits it offers while meeting your security and compliance needs. Stop by booth #128 at the event to meet 8K Miles' Director of Sales, Tom Crowley, a versatile, goal-oriented sales, business development and marketing professional with 20+ years of wide-ranging experience and accomplishments in the information security industry.

Also, at the event on Wednesday, April 6th from 10:20-10:40am, two of our 8K Miles speakers, Sudish Mogli, Vice President, Engineering, and Saravana Sundar Selvatharasu, AVP, Life Sciences, will be presenting on Architecting your GxP Cloud for Transformation and Innovation. They will share solutions and case studies for designing and operating in the cloud for a GxP environment, utilizing a set of frameworks that encompasses Operations, Automation, Security and Analytics.

We are just a tweet away! To schedule a one-on-one meeting with us, tweet @8KMiles. We look forward to meeting you at the event!

How Cloud Computing Can Address Healthcare Industry Challenges

Healthcare & Cloud Computing

The sustainability and welfare of mankind depend on the healthcare industry, yet technology is under-utilized there, which limits the sector's operational competence. Some healthcare providers still depend on paper records, while others have digitized their information. Better use of technology will help coordinate care and ease interactions between patients, physicians and the wider medical community.

Cloud computing is being adopted globally to reform and modernize the healthcare sector. The industry is shifting to a model that collectively supports and coordinates workflows and medical information. Cloud computing helps the healthcare industry store large volumes of data, facilitates the sharing of information among physicians and hospitals, and improves data analysis and tracking. This in turn helps with treatments, the performance of physicians and students, costs and studies.

Overcome Challenges in Healthcare Industry through Cloud Computing

In the healthcare industry, the utmost importance should be given to security, confidentiality, availability of data to users, long-term preservation, data traceability and data reversibility. Some of the challenges healthcare IT systems face concern exchanging, maintaining and making use of huge volumes of information. Hence, while moving healthcare information to cloud computing, careful thought should be given to the type of application, i.e. clinical or nonclinical, the organization wants to move.

While choosing a cloud deployment model for an application, details such as security, privacy and application requirements should be considered. Cloud services can be public, private or hybrid. Clinical applications are typically deployed in a private or hybrid cloud, as they require the highest level of precautions, whereas nonclinical applications fit well under a public cloud deployment model.

Cloud computing is emerging as a vital technology in the healthcare industry but is still underutilized. Those involved in healthcare, such as medical practitioners, hospitals and research facilities, can consider the different cloud service models that address their business needs: Software as a Service (SaaS), Infrastructure as a Service (IaaS) and Platform as a Service (PaaS).

Among the three service models, SaaS, with its pay-per-use business model, is the most economically attractive option, especially for small hospitals or physician practices, as it doesn't require full-time IT personnel and reduces the capital expense of hardware, software and operating systems.

PaaS is a good option for large-scale healthcare institutions that have the resources to develop cloud solutions further. IaaS is feasible for healthcare organizations seeking more scalable infrastructure, as it is cost-effective and provides scalability along with security, flexibility, data protection and back-ups.

Thus, cloud computing can be a game-changer for the healthcare industry with respect to its service offerings, operating models, capabilities and end-user services. With cloud computing, the challenges the industry faces in managing medical information and in storing, retrieving and accessing data can be eliminated. By adopting cloud services the healthcare industry can even overtake other industries in its use of technology, and accessing or monitoring healthcare-related information across the globe becomes easier.

Related post from 8KMiles…
How pharmaceuticals are securely embracing the cloud

Keeping watch on AWS root user activity is normal or anomaly

Avoid malicious cloud trial action in your AWS account cloud watch lamda

27 Best practice tips on amazon web services security groups

8K Miles Tweet Chat : AWS Key Management Service (KMS)

Follow us on Twitter @8KMiles

Did you miss the tweet chat on AWS KMS organised by 8K Miles? If you missed it, or wish to revisit what happened during the 12th February tweet chat, this is the place. Here is a compilation of all the questions asked and the answers given by the tweet chat participants.

The official tweet chat handle of 8K Miles, @8KMilesChat, shared commonly asked questions related to AWS KMS, and here's how they were answered.

[Screenshots of the AWS KMS tweet chat questions and answers appeared here.]

This is how informative our tweet chat was last time. For more such chats, stay tuned to our page for updates.

A brief summary of our cloud experts

Ramprasad

Ramprasad is a Solutions Architect with 8KMiles. In his current role as an AWS certified Solutions Architect, he works in the Cloud Infrastructure team helping customers evaluate the cloud platform and suggesting and implementing the right set of solutions and services that AWS offers.

Senthilkumar

Senthilkumar is a Senior Cloud Engineer with 8KMiles. A certified AWS Solutions Architect, he works in the Cloud Infrastructure team helping customers with DevSecOps, operational excellence and implementation.

Related post from 8KMiles….
How pharmaceuticals are securely embracing the cloud

Keeping watch on AWS root user activity is normal or anomaly

Avoid malicious cloud trial action in your AWS account cloud watch lamda

27 Best practice tips on amazon web services security groups