What is the best file permission for wordpress?

chown www-data:www-data  -R * # Let Apache be owner
find . -type d -exec chmod 755 {} \;  # Change directory permissions rwxr-xr-x
find . -type f -exec chmod 644 {} \;  # Change file permissions rw-r--r--

After the setup you should tighten the access rights: according to Hardening WordPress, all files except for wp-content should be writable by your user account only; wp-content must be writable by www-data too.

chown <username>:<username>  -R * # Let your user account be the owner
chown www-data:www-data -R wp-content # Let Apache be the owner of wp-content

Maybe you want to change the contents of wp-content later on. In that case you could

  • temporarily change to the www-data user with su,
  • give wp-content group write access (775) and add your user to the www-data group, or
  • give your user access rights to the folder using ACLs.

Whatever you do, make sure the files have rw permissions for www-data.
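Option 2 above (group write access) can be rehearsed on a throwaway directory first. This is a minimal sketch: the mktemp path is a stand-in for your real wp-content, and the chgrp/usermod steps are shown only as comments since they require the www-data group to exist.

```shell
# Stand-in for /var/www/wordpress/wp-content (hypothetical path).
WP_CONTENT=$(mktemp -d)
# On a real server you would also run:
#   chgrp -R www-data "$WP_CONTENT"     # hand the group to Apache
#   usermod -aG www-data <username>     # join the www-data group yourself
chmod 775 "$WP_CONTENT"                 # rwxrwxr-x: group members may write
stat -c '%a' "$WP_CONTENT"              # prints 775
```

Once the real wp-content carries group www-data with mode 775, both your account and Apache can write to it while everything else stays 755/644.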

AWS Lambda - the Swiss Army knife of the AWS platform

AWS Lambda can be used in a number of different scenarios. It is easy to get started with and can run with minimal operating costs. One of the reasons behind its popularity is the flexibility it offers developers and cloud architects.

The service performs all the operational and administrative activities on your behalf, including capacity provisioning, monitoring fleet health, applying security patches to the underlying compute resources, deploying your code, running a web service front end, and monitoring and logging your code.

Lambda has a “pay as you go” (PAYG) pricing model with a generous free tier, and it is one of the most appealing services for cost savings. Lambda billing is based on memory used, number of requests, and execution duration rounded up to the nearest 100 milliseconds. This is a huge leap compared to the hourly billing of EC2.

Lambda can instantly scale up to a large number of parallel executions, limited by the number of concurrent requests you configure. Downscaling is handled simply by terminating the function execution automatically when the code finishes running.

Lambda comes with native support for a number of programming languages: Java, Node.js, and Python. There are additional open source frameworks for more supported languages and for rapid development and deployment, such as Serverless (formerly known as JAWS).

Uses of Lambda:

Operating serverless websites

This is one of the killer use cases that takes advantage of the pricing model of Lambda and of S3-hosted static websites. Consider hosting the web frontend on S3, and accelerating content delivery with CloudFront caching. The web frontend can send requests to Lambda functions via API Gateway HTTPS endpoints. Lambda can handle the application logic and persist data to a fully managed database service (RDS for relational, or DynamoDB for non-relational databases).

Log analysis and notifications

You can easily build a Lambda function to check log files from CloudTrail or CloudWatch. Lambda can search the logs for specific events or log entries as they occur and send out notifications via SNS. You can also easily implement custom notification hooks to Slack, Zendesk, or other systems by calling their API endpoints from within Lambda.
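A minimal sketch of the filtering step such a function might perform. The event shape and the keyword list are illustrative assumptions, not a fixed CloudTrail/CloudWatch schema, and the SNS publish itself is left as a boto3 comment so the sketch stays self-contained.

```python
# Hypothetical log-scanning Lambda: flag interesting lines, then notify.

def find_alert_lines(log_lines, keywords=("ERROR", "UnauthorizedOperation")):
    """Return the log lines that should trigger a notification."""
    return [line for line in log_lines if any(k in line for k in keywords)]

def handler(event, context=None):
    hits = find_alert_lines(event.get("logLines", []))
    if hits:
        # In a real function you would publish via SNS, e.g.:
        # boto3.client("sns").publish(TopicArn=TOPIC_ARN, Message="\n".join(hits))
        pass
    return {"matches": hits}
```

The same `find_alert_lines` helper could feed a Slack or Zendesk webhook instead of SNS; only the delivery call changes.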
Automated backups and everyday tasks

Scheduled Lambda events are great for housekeeping within AWS accounts. Creating backups, checking for idle resources, generating reports, and other tasks that occur frequently can be implemented in no time using the boto3 Python library.
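As a sketch of the "checking for idle resources" idea: the decision logic can be kept separate from the AWS calls. The datapoint shape and the 5% threshold are illustrative assumptions; a real scheduled function would fetch the datapoints with boto3 (CloudWatch metrics) and then stop or report the instance.

```python
# Hypothetical helper for a scheduled housekeeping Lambda.

def is_idle(cpu_datapoints, threshold=5.0):
    """True when every average-CPU datapoint sits below the threshold."""
    if not cpu_datapoints:
        return False  # no data: don't flag the instance as idle
    return all(dp["Average"] < threshold for dp in cpu_datapoints)
```

In the Lambda body you would loop over instances, call `is_idle` on each one's recent datapoints, and act (tag, stop, or report) on the ones that match.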

Processing uploaded S3 objects

By using S3 object event notifications, you can have Lambda start processing your files immediately once they land in S3 buckets. Image thumbnail generation by Lambda is a great example of this use case: the solution is cost effective and you don’t need to worry about scaling up – Lambda will handle any load!
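A minimal sketch of the entry point for such a function. The event layout follows the standard S3 notification structure; the thumbnail step itself is left as a comment since it depends on an imaging library.

```python
# Hypothetical S3-triggered Lambda: extract (bucket, key) pairs from the event.
from urllib.parse import unquote_plus

def handler(event, context=None):
    """Return the (bucket, key) pairs named in an S3 event notification."""
    objects = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])  # keys arrive URL-encoded
        objects.append((bucket, key))
        # Real work would go here: download the object, generate the
        # thumbnail, and upload it to a destination bucket.
    return objects
```

Note the `unquote_plus` call: S3 URL-encodes object keys in event notifications, so a key with spaces or `+` characters must be decoded before use.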

Filtering and transforming data on the fly

Because Lambda is highly scalable, it is great for moving data between S3, Redshift, Kinesis, and database services, filtering it on the fly. You can easily place Lambda between services to transform and load data as required.
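A sketch of this pattern in the style of a Kinesis Firehose transformation Lambda: records arrive base64-encoded, are filtered and reshaped, and are returned re-encoded with a result status. The payload fields (`status`, `id`, `value`) are illustrative assumptions; the surrounding record shape (`recordId`, `result`, `data`) mirrors the Firehose transformation event.

```python
# Hypothetical transform-on-the-fly Lambda for a Firehose-style stream.
import base64
import json

def handler(event, context=None):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        if payload.get("status") == "ok":          # drop records we don't want
            transformed = {"id": payload["id"], "value": payload["value"] * 2}
            output.append({
                "recordId": record["recordId"],
                "result": "Ok",
                "data": base64.b64encode(
                    json.dumps(transformed).encode()).decode(),
            })
        else:
            output.append({"recordId": record["recordId"], "result": "Dropped"})
    return {"records": output}
```

The same decode/filter/re-encode shape works for loading into Redshift or S3; only the transformation in the middle changes.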


Lambda comes with a number of “limitations” which are good to keep in mind when architecting a solution.

There are some “hard limitations” in the runtime environment: disk space is limited to 512 MB, memory can vary from 128 MB to 1536 MB, and the execution timeout for a function is capped at 5 minutes. Package constraints, such as the size of the deployment package and the number of file descriptors, are also defined as hard limits.

Similarly, there are “limitations” on the requests served by Lambda: the request and response body payload size is capped at 6 MB, while the event request body can be up to 128 KB.


If your function needs to run for hours, it belongs on Elastic Beanstalk or EC2 rather than Lambda.


Lambda is one of the most versatile tools in the AWS ecosystem and suits many use cases. However, as with any powerful, highly scalable service, things can go horribly wrong if functions are not implemented well, so always make sure you have architected and tested thoroughly before publishing your functions live.

How to setup SFTP server on Amazon EC2 using Ubuntu 16.04

Let us see how we can setup the SFTP server on Amazon EC2 using Ubuntu.

First of all, we need to launch an EC2 instance with the Ubuntu (16.04) OS. Once we have the EC2 instance, follow these steps to set up the SFTP server.

Log in to the machine using its public/Elastic IP. ‘ubuntu’ is the default username for Ubuntu:

login as: ubuntu

Then promote yourself to the root user so you get all privileges:

sudo -i

Update all the available packages:

apt-get update -y

Install vsftpd package

apt-get install vsftpd

Add a user and set its password

adduser salim

where salim is the desired username.

Make a .ssh directory in the user’s home directory. This directory will let us log in to the server using a private key:

mkdir /home/salim/.ssh

Create a private and public key for the user. You can use either ssh-keygen or PuTTYgen (on Windows).

Let us create the key pair using ssh-keygen. First go to the .ssh directory which we just created:

cd /home/salim/.ssh

Generate the Key-Pair

ssh-keygen -t rsa -f ./id_rsa  # -f writes the key pair into the current directory rather than root's own ~/.ssh

Copy the content of the public key (the file with the .pub extension) into authorized_keys, which should be located inside the .ssh directory:

cat id_rsa.pub

Copy the content displayed on the shell

vim authorized_keys

Paste the copied content in authorized_keys

Save and Close the file.

Now change the file permissions and the ownership

chmod 700 /home/salim/.ssh

chmod 600 /home/salim/.ssh/authorized_keys

chown -R salim:salim /home/salim/.ssh
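The permission scheme above can be rehearsed on a scratch directory before touching the real one. This sketch uses a temporary path as a stand-in for /home/salim/.ssh and just verifies the resulting modes:

```shell
# Temporary stand-in for /home/salim/.ssh (hypothetical path).
SSH_DIR=$(mktemp -d)/.ssh
mkdir -p "$SSH_DIR" && touch "$SSH_DIR/authorized_keys"
chmod 700 "$SSH_DIR"                    # only the owner may enter .ssh
chmod 600 "$SSH_DIR/authorized_keys"    # only the owner may read the keys
stat -c '%a' "$SSH_DIR" "$SSH_DIR/authorized_keys"   # prints 700 then 600
```

sshd silently refuses key authentication when .ssh or authorized_keys is readable by others, so these exact modes matter.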

Copy the private key to the machine from which you want to access SFTP. You will need to convert the key to .ppk format (e.g. with PuTTYgen) if you want to access SFTP from Windows.

Now we have the user and its private key. Test the connection with the server. You can use PuTTY if you are using Windows.

Next, you should be able to access the server. But that’s not all; the real work is coming up next!

Now we have to jail the user to a specific directory and restrict its shell access, so that the user can’t access the command shell of the server.

First of all, create a group for SFTP users

groupadd sftpusers

then, add our user into that group

adduser salim sftpusers

Now salim is a member of the sftpusers group.

Create an SFTP directory and change its permissions:

mkdir /sftp

chmod 755 /sftp

chown root:sftpusers /sftp

Create a directory inside ‘/sftp’. For example, we are going to create the directory ‘shared’ to share data among several users:

mkdir /sftp/shared

chown root:sftpusers /sftp/shared

Change the permissions of the ‘shared’ directory so that only users in the sftpusers group can see and modify the data inside it:

chmod 770 /sftp/shared

We have to modify sshd_config to specify the SFTP directory and jail the user into that directory:

vi /etc/ssh/sshd_config

We have to replace the Subsystem line. Comment out the following line:

Subsystem sftp /usr/lib/openssh/sftp-server

So it should look like

#Subsystem sftp /usr/lib/openssh/sftp-server

And add following line:

Subsystem sftp internal-sftp

Add the following lines at the bottom of the file, below ‘UsePAM yes’:

Match group sftpusers

ChrootDirectory /sftp/

X11Forwarding no

AllowTcpForwarding no

ForceCommand internal-sftp

Save and Close the file.

Switch the ownership of the user’s home directory to root, without changing the ownership of the .ssh directory, which is still needed to verify the key:

chown root:root /home/salim

chown -R salim:salim /home/salim/.ssh

Restart SSH

/etc/init.d/ssh restart

Mission accomplished! Test your SFTP connection using an SFTP tool like WinSCP. Check that you have jailed the user and blocked its shell access; the user should not be able to access the shell of the server.

Before connecting, you can check which port the service is listening on. SFTP here runs over SSH (port 22 by default), so look for sshd:

sudo netstat -tulpn | grep ssh
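The port-extraction step can be rehearsed without a live daemon. In this sketch the netstat output line is hard-coded as a stand-in for the real `sudo netstat -tulpn` output, and awk pulls out the listening port:

```shell
# Stand-in for one line of `sudo netstat -tulpn` output (illustrative).
sample="tcp   0   0 0.0.0.0:22   0.0.0.0:*   LISTEN   1234/sshd"
# Field 4 is the local address; split on ':' to isolate the port.
echo "$sample" | awk '/sshd/ {split($4, a, ":"); print a[2]}'   # prints 22
```

Against a real server, pipe the netstat output through the same awk filter to confirm sshd is listening where you expect.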

Amazon AWS Service Overview

I have been using various hosting services since 2004, and since then there have been huge changes in the hosting industry. Due to the demand for application hosting, I was curious to find a flexible platform on a budget. Back in 2013 I used Amazon for the first time, then moved to DigitalOcean. I also tried Rackspace and Google Cloud, and now I am slowly shifting back to Amazon AWS.

Here is an overview of the huge growth of Amazon AWS and some of its super cool features. This is a brief list, and in future I might write in detail about specific services.



Compute:

EC2:

Elastic Compute Cloud - cloud computing where you can get virtual machines or dedicated machines.

EC2 Container Service:

You can run Docker containers across multiple instances.

Elastic Beanstalk:

Very useful if you are a developer and want to deploy code without going into much detail about the architecture.

Lambda:

Run code without managing servers; much of the Amazon platform itself runs on Lambda. An essential service.

Lightsail:

For the novice user to start with - you can manage everything from the management console.

Batch:

For batch computing, if you want to do batch computing in the cloud.

Storage:

S3:

Simple Storage Service. It has buckets to which you can upload files.

EFS:

Elastic File System - you can attach it to networks or instances.

Glacier:

Your data archive.

Snowball:

To bring in large amounts of data, e.g. from ISPs, telephony, etc.

Storage Gateway:

A virtual machine installed in your head office which can replicate to the AWS system.



Database:

RDS:

All the relational database systems, such as MySQL and Aurora (the AWS version of MySQL).

DynamoDB:

A non-relational database.

ElastiCache:

A caching database to improve performance.

Redshift:

The data warehousing service.


Migration:

AWS Migration Hub:

A tracking and integration service for migrations.

Application Discovery Service:

To test dependencies and track them for an application.

Database Migration Service:

A migration service for moving databases from on-site to the cloud.

Server Migration Service:

A similar service for migrating local servers to the cloud.

Snowmobile:

A weird service (a literal truck), used for moving very large amounts of data into the cloud.


Networking & Content Delivery:

VPC (Amazon Virtual Private Cloud):

A virtual data center where you can configure firewalls, zones, route tables, etc. A little bit complicated, but essential to know inside out; once you get started it is easy. It is a fundamental part of AWS.

CloudFront:

Amazon’s content delivery network for video files, image files, etc.; it performs caching at edge locations.

Route 53:

Amazon’s DNS service - useful for me.

API Gateway:

For creating your own APIs in front of other AWS services.

Direct Connect:

A way of running a dedicated line from your head office to Amazon.

Developer Tools:

CodeStar:

A way of getting developers into one team: project management and code collaboration.

CodeCommit:

A repository service - source control, versioning, etc.

CodeBuild:

Compiles, tests, and builds packages for deployment.

CodeDeploy:

Makes it easy to deploy to EC2.

CodePipeline:

A continuous delivery service.

X-Ray:

Debug and analyze application performance.

Cloud9:

An IDE environment in the cloud where you can write and run code.

Management tools:

The bread and butter for solutions architects.

CloudFormation:

Essential - useful for scripting infrastructure. An essential DevOps tool: turning infrastructure into code.

CloudTrail:

Logs API activity in your AWS environment. Enabled by default, with logs held for one week. Should be reviewed.

CloudWatch:

Monitors the entire AWS environment.

OpsWorks:

Similar to Elastic Beanstalk, but uses Chef or Puppet.

Service Catalog:

A managed catalog of IT services for large corporations - anything from virtual machine images onwards - for governance or compliance requirements.

Systems Manager:

Used for EC2: applying security patches, grouping your resources, and so on; Systems Manager makes this easier.

Trusted Advisor:

Gives you advice and recommendations on security issues, resource usage, cost savings, etc.

Managed Services:

A premium service run by the AWS team.

Media Services

Elastic Transcoder:

Record video and upload it - it is resized for all platforms.

MediaConvert:

File-based video transcoding for multiscreen broadcasting.

MediaLive:

High-quality live video streaming for various devices.

MediaPackage:

Protects media for delivery.

MediaStore:

Media-optimized storage with the consistency needed for live or on-demand content.

MediaTailor:

Targeted ad insertion into video streams without sacrificing quality.

Machine Learning:

SageMaker:

Helps with building and deploying deep learning models.

Comprehend:

Sentiment analysis on content.

DeepLens:

An AI camera - build apps that detect and recognize objects, using a device you buy from Amazon.

Lex:

Powers the Alexa service - for communicating with customers, chatbots, etc.

Machine Learning:

Predictive analysis on data sets - Amazon’s original machine learning tool.

Polly:

A text-to-speech service - choose region, accent, etc. You can use Polly to convert text into MP3 files and stream them to an Echo to have Alexa read them out. A cool service.

Rekognition:

Video and image recognition - the AI will tell you whether it’s a dog or a cat.

Translate:

Machine translation from English to other languages.

Transcribe:

Transcribes courses into captions - automatic speech recognition to text, which can then be converted into different languages.


Analytics:

Athena:

Run SQL queries on S3 buckets - queries against object storage.

EMR:

Elastic MapReduce - a big data specialty for processing large amounts of data; it chops data up for analysis.

CloudSearch:

A search service by AWS.

Elasticsearch Service:

Another search service by AWS.

Kinesis:

A huge topic for big data; I will cover the details in future.

Kinesis Video Streams:

A brand new service in 2018 - a way of ingesting large amounts of streaming data into AWS, like social media feeds or tweets, and running various processes on it.

QuickSight:

New in 2016 - Amazon’s BI tool. A fantastic tool.

Data Pipeline:

For moving data between different AWS services.

Glue:

Amazon’s ETL service for large data sets - extract, transform, and load. Can be used for migrations. New in 2018.

Security, Identity & Compliance:

IAM:

Very useful, and the first service you need to know inside and out.

Cognito:

Device authentication - users can authenticate using a mobile phone, Facebook, LinkedIn, etc., to gain entry access to AWS-backed applications.

GuardDuty:

Monitors for malicious activity in AWS - a brand new service.

Inspector:

Tests instances on a schedule and produces a severity-ranked findings list.

Macie:

Checks S3 for privacy and data security issues and alerts you.

Certificate Manager:

Gives you free SSL certificates if you register your domain using Route 53.

CloudHSM:

Hardware security module - stores private and public keys and other encryption keys. Accessible, and charged at $1.20 an hour.

Directory Service:

Integrates Microsoft Active Directory with Amazon services.

WAF:

Web Application Firewall - stops attacks at the application layer, such as SQL injection.

Shield:

DDoS protection, enabled by default for load balancers, CloudFront, and so on. Shield Advanced is about $3,000 a month.

Artifact:

On-demand access to compliance reports - download SOC, PCI, and other reports.

Mobile Services:

Mobile Hub:

Connects a mobile SDK to a new AWS backend.

Pinpoint:

Drives push notifications to your mobile users, with location tracking. New in 2018.

AWS AppSync:

Updates data for offline users when they reconnect.

Device Farm:

For testing apps on real devices.

Mobile Analytics:

An analytics service for mobile.

AR/VR: Augmented and Virtual Reality

Sumerian:

Introduced in 2017, Sumerian is useful for designing AR/VR applications and creating these environments.

The cool thing is you don’t need to think about code, yet you can create your own virtual world.

Application integration:

Step Functions:

Coordinates Lambda functions into workflows.

Amazon MQ:

A message queue service, new in 2018.

SNS:

Launched in 2006 - a notification service, for billing alerts and the like.

SQS:

A queuing service for decoupling your infrastructure, e.g. from EC2.

SWF:

Simple Workflow - a workflow service for simple workflow jobs, which can include human interaction steps.

Customer Engagement:

Connect:

Cool - a contact center as a service, with dynamic, natural, and personal call flows.

Simple Email Service:

A bulk email service. Customisable.

Business Connectivity:

Alexa for Business:

Seriously cool - you can dial into meetings, call tech support, order printers, etc.

Chime:

Video conferencing - similar to Google Hangouts etc., but requires low bandwidth.

WorkDocs:

Amazon’s version of Dropbox.

WorkMail:

Amazon’s version of Office 365.

Desktop and App Streaming:

WorkSpaces:

Virtual desktops in the cloud.

AppStream:

Streams applications - similar to Citrix.

Internet of Things:

IoT Device Management:

For onboarding and managing fleets of IoT devices.

Amazon FreeRTOS:

An operating system for microcontrollers, launched by Amazon in 2017.

Greengrass:

A secure way of connecting machines, with support for machine learning at the edge.

Game Development

GameLift:

A service you can use to develop games, including virtual reality, launched in 2016.

Amazon EC2 CodeDeploy with GitHub - Part 2

Part 2:

In Part 1 we mentioned that we would create another role in the IAM console. In this part, before creating an application, let’s create that role.

Get the Service Role ARN (Console)

To use the IAM console to get the ARN of the service role:

Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.

In the navigation pane, choose Roles.

In the Filter box, type CodeDeployServiceRole, and then press Enter.

Choose CodeDeployServiceRole.

Make a note of the value of the Role ARN field.


Create an application and deployment group (console)

Sign in to the AWS Management Console and open the AWS CodeDeploy console at https://console.aws.amazon.com/codedeploy.


Sign in with the same account or IAM user information you used in Getting Started with AWS CodeDeploy.

On the Applications page, choose Create application.


If you haven’t created any applications yet and the AWS CodeDeploy start page is displayed, choose Get Started Now, complete a deployment using the Sample deployment wizard, and then return to this topic.

In the Application name box, type CodeDeployGitHubDemo-App.

In the Deployment group name box, type CodeDeployGitHubDemo-DepGrp.

In Deployment type, choose In-place deployment.

In Environment configuration, depending on the type of instance you are using, choose the Amazon EC2 instances tab or the On-premises instances tab.

In the Key and Value boxes, type the instance tag key and value that was applied to your instance as part of Step 4: Provision an Instance in the earlier wiki Part 1.

In the Service role ARN drop-down list, choose your service role ARN. (Follow the instructions in Get the Service Role ARN (Console) if you need to find your service role ARN.)

Choose Create application, and continue to the next step.


1) On the Application details page, in Deployment groups, choose the button next to CodeDeployGitHubDemo-DepGrp.

2) In the Actions menu, choose Deploy new revision.

3) On the Create deployment page, in the Repository type area, choose My application is stored in GitHub.

4) In Connect to GitHub, do one of the following:

To create a connection for AWS CodeDeploy applications to a GitHub account, sign out of GitHub in a separate web browser tab.

In GitHub account, type a name to identify this connection, and then choose Connect to GitHub. The web page prompts you to authorize AWS CodeDeploy to interact with GitHub for the application named CodeDeployGitHubDemo-App.

5)Follow the instructions on the Sign in page to sign in with your GitHub account.

6)To use a connection you have already created, in GitHub account, select its name, and then choose Connect to GitHub.

7) To create a connection to a different GitHub account, sign out of GitHub in a separate web browser tab. Choose Connect to a different GitHub account, and then choose Connect to GitHub. Continue to step 5.

Follow the instructions on the Sign in page to sign in with your GitHub account.

On the Authorize application page, choose Authorize application.

On the AWS CodeDeploy Create deployment page, in the Repository name box, type the GitHub user name you used to sign in, followed by a forward slash (/), followed by the name of the repository where you pushed your application revision (for example, my-github-user-name/CodeDeployGitHubDemo).

If you are unsure of the value to type, or if you want to specify a different repository:

In a separate web browser tab, go to your GitHub dashboard.

In Your repositories, hover your mouse pointer over the target repository name. A tooltip appears, displaying the GitHub user or organization name, followed by a forward slash character (/), followed by the name of the repository. Type this displayed value into the Repository name box.


If the target repository name is not displayed in Your repositories, use the Search GitHub box to find the target repository and corresponding GitHub user or organization name.

Click the commit number to display the commit details page, where you can obtain the commit ID. Copy it, then paste it into the Commit ID box in the AWS CodeDeploy console.

Choose Deploy, and continue to the next step.

Step 7:

In this step, you will use the AWS CodeDeploy console or the AWS CLI to verify the success of the deployment. You will use your web browser to view the web page that was deployed to the instance you created or configured.


If you’re deploying to an Ubuntu Server instance, use your own testing strategy to determine whether the deployed revision works as expected on the instance, and then go to the next step.

Last Steps:

To delete the AWS CodeDeploy deployment component records

Sign in to the AWS Management Console and open the AWS CodeDeploy console at https://console.aws.amazon.com/codedeploy.


Sign in with the same account or IAM user information you used in Getting Started with AWS CodeDeploy.

If the Applications page is not displayed, on the AWS CodeDeploy menu, choose Applications.

Choose CodeDeployGitHubDemo-App.

On the Application details page, in Deployment groups, choose the button next to the deployment group. On the Actions menu, choose Delete. When prompted, type the name of the deployment group to confirm you want to delete it, and then choose Delete.

At the bottom of the Application details page, choose Delete application.

When prompted, type the name of the application to confirm you want to delete it, and then choose Delete.

Amazon EC2 CodeDeploy with GitHub - Part 1

Step 1:

Create an IAM Instance Profile for Your Amazon EC2 Instances

Your Amazon EC2 instances need permission to access the Amazon S3 buckets or GitHub repositories where the applications that will be deployed by AWS CodeDeploy are stored. To launch Amazon EC2 instances that are compatible with AWS CodeDeploy, you must create an additional IAM role, an instance profile.

We will create this role later.

You can create an IAM instance profile with the AWS CLI, the IAM console, or the IAM APIs.


You can attach an IAM instance profile to an Amazon EC2 instance as you launch it or to a previously launched instance later.

Some forums insist that you cannot create and attach roles to a running instance. That may have been true for older versions, but I tested it and attaching new roles works fine.

Let’s start:

Create an IAM Instance Profile for Your Amazon EC2 Instances (Console)

Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.

In the IAM console, in the navigation pane, choose Policies, and then choose Create policy. (If a Get Started button appears, choose it, and then choose Create Policy.)

On the Create policy page, paste the following in the JSON tab (the s3:Get*/s3:List* actions are the ones the AWS documentation uses for this policy):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Note: it is recommended that you restrict this policy to only those Amazon S3 buckets your Amazon EC2 instances must access. Make sure to give access to the Amazon S3 buckets that contain the AWS CodeDeploy agent; otherwise, an error may occur when the AWS CodeDeploy agent is installed or updated on the instances. To grant the IAM instance profile access to only certain AWS CodeDeploy resource kit buckets in Amazon S3, use the following policy, but remove the lines for the buckets you want to prevent access to:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::aws-codedeploy-us-east-2/*",
        "arn:aws:s3:::aws-codedeploy-us-east-1/*"
      ]
    }
  ]
}

Note: keep only the bucket ARNs for the regions you require. The two shown here are examples; the full per-region list is in the AWS CodeDeploy documentation.

Choose Review policy.

On the Create policy page, type CodeDeployDemo-EC2-Permissions in the Policy Name box.

(Optional) For Description, type a description for the policy.

Choose Create Policy.

In the navigation pane, choose Roles, and then choose Create role.

On the Select role type page, choose AWS service, and from the Choose the service that will use this role list, choose EC2.

From the Select your use case list, choose EC2.

Choose Next: Permissions.

On the Attached permissions policy page, if there is a box next to CodeDeployDemo-EC2-Permissions, select it, and then choose Next: Review.

On the Review page, in Role name, type a name for the service role (for example CodeDeployDemo-EC2-Instance-Profile), and then choose Create role.

You can also type a description for this service role in the Role description box.

You’ve now created an IAM instance profile to attach to your Amazon EC2 instances.

Step 2:Launch an Amazon EC2 Instance (Console)

Sign in to the AWS Management Console and open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.

In the navigation pane, choose Instances, and then choose Launch Instance.

On the Step 1: Choose an Amazon Machine Image (AMI) page, from the Quick Start tab, locate the operating system and version you want to use, and then choose Select.

On the Step 2: Choose an Instance Type page, choose any available Amazon EC2 instance type, and then choose Next: Configure Instance Details.

On the Step 3: Configure Instance Details page, in the IAM role list, choose the IAM instance role you created in the earlier step.

On Step 4, add these security group rules for your instance:


HTTP TCP 80 ::/0



HTTPS TCP 443 ::/0

Expand Advanced Details.

Next to User data, with the As text option selected, type the following to install the AWS CodeDeploy agent as the Amazon EC2 instance is launched.

For Amazon Linux and RHEL

#!/bin/bash
yum -y update
yum install -y ruby
cd /home/ec2-user
curl -O https://bucket-name.s3.amazonaws.com/latest/install
chmod +x ./install
./install auto

Note: the bucket-name replacement depends on your region. For example:

Region name: US East (Ohio) | bucket-name replacement: aws-codedeploy-us-east-2 | Region identifier: us-east-2

In my scenario my region is US East (Ohio)
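The replacement can be scripted: the resource-kit bucket names follow the pattern aws-codedeploy-<region-identifier>. This sketch builds the install URL for the example region from the table above (us-east-2); substitute your own region identifier.

```shell
# Hypothetical helper: construct the CodeDeploy agent install URL by region.
REGION="us-east-2"                  # replace with your region identifier
BUCKET="aws-codedeploy-${REGION}"   # bucket names follow this pattern
echo "https://${BUCKET}.s3.amazonaws.com/latest/install"
```

The echoed URL is what you would pass to `curl -O` in the user-data script above.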

You can find your bucket name list from this URL:

Step 3:
Now the instance is running. You need to make sure your instance is ready for the CodeDeploy agent, which requires a few extra steps.

Extra work summary: create a key pair and download it to connect to the instance. Once connected, you can perform the tasks below:

Connect Command line of Amazon Linux AMI

a. When server is booted

1. Create IAM Roles
CodeDeploy & EC2CodeDeploy
2. Create an EC2 instance with the following settings

a. Choose AMI: Amazon Linux AMI

b. Choose Instance type: t2.micro

c. Configure Instance: Choose EC2CodeDeploy IAM role

d. Tag Instance: Name it what you please

e. Configure Security Group:


HTTP TCP 80 ::/0



HTTPS TCP 443 ::/0


3. Login to EC2 instance

4. Command line of Amazon Linux AMI

a. When server is booted

sudo su

yum -y update

yum install -y aws-cli

cd /home/ec2-user

b. Here you will setup your AWS access, secret, and region.

aws configure

aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1 (if in east AWS)

aws s3 cp s3://aws-codedeploy-us-west-2/latest/install . --region us-west-2 (if in west AWS)

chmod +x ./install

c. This is simply a quick hack to get the agent running faster.

sed -i "s/sleep(.*)/sleep(10)/" install

./install auto

d. Verify it is running.

service codedeploy-agent status



This should display whether the agent is running.

The result should be:

The AWS CodeDeploy agent is running as PID 8316

Check the next post on the wiki.

How is leadership changing in the 21st century?

In the 21st century there are millions of innovators but few authentic innovation leaders able to inspire and guide innovators and their teams to create breakthrough products from their innovative ideas. The essential elements of knowing yourself and your unique leadership gifts, your motivations, and the purpose of your leadership are covered by Professor Bill George, Senior Fellow, Harvard Business School.

How is leadership changing in the 21st century?

We’re going from hierarchical organizations of the 20th century to empowering organizations of the 21st century.  And frankly, hierarchical leadership is out, but empowering leadership is in.

So you need to think about yourself in that role.

We’re moving away from bureaucracy, and I see bureaucracy as the enemy of innovation, to a series of independent and interdependent units in organizations.

Each of them working together to create their own innovations, but maybe overall, they’re creating a great system. Like we have in Alphabet, which originated with Google and still contains Google, but has all kinds of creative ideas, from self-driving cars to expanding the lifespan to 150 years. Breakthrough, amazing ideas, and these come out of innovative teams. They don’t all come from the top; they come from the bottom up.

We’re going from an era of limited information to one of transparency, and that’s good, so we share information.  It’s not about secretive information.  We’re going from an era where leaders were thought of as having great charisma and great style.

It’s not about that, it’s being authentic and open. Maybe you’re an introvert. Many of the great leaders are introverts.

There’s no criteria for a great leader. It’s being who you are. But it is important that this is not about your self-interest.

But it’s about recognizing that you’re leading in service to others and a greater cause.

What are some of the factors that are influencing us as we go on this journey?

Well, one clearly is globalization. There are great ideas all around the world. And one of the things we hope to do through this course is bring those ideas and put them on the table and honor them, regardless of where they come from.

The source isn’t important, it’s the idea that’s important. The second, technology and social media are changing everything, even the ability of a course like this to be taught not in a classroom, but to be taught remotely.

And the millennials are having a dramatic impact on our thinking. Millennials don’t wanna wait their turn in line.  They don’t wanna wait ten years to get into the opportunity to make a difference.

They wanna do it right now, and I say, that’s great.

When I was 27 years old, I was given the opportunity to run the consumer microwave oven business for Litton Industries. There was no consumer market. We had to create it. And we went from a company that was serving restaurants of about 10 million to about 200 million in six, seven years. That was one of the great innovation experiences of my lifetime.

Frankly, I never designed a microwave oven in my life. But we had the opportunity to design really creative products and to lead those teams.

And then to have creative marketing teams that showed people how to use the product, and that was a really wonderful experience. So don’t let anyone ever say that millennials are too young to take on these responsibilities. Nonsense. If you see the breakthroughs of a Mark Zuckerberg at Facebook, you get an idea of how a leader can mature through actual experience.

And we should celebrate diversity. Diversity is not a threat. We want diversity of ideas. We want diversity of life experiences, cuz a lot of those ideas come from there. And by having diversity on our teams, we’re gonna have a stronger team. So those four factors are having a big impact on how we think about ourselves today as leaders.

Let’s address the question of why innovation leaders fail. They don’t fail for lack of IQ. I’ve studied dozens of innovation leaders who have failed, and they failed because they lacked emotional intelligence.

What do I mean by that?

Let’s look at some of the qualities of emotional intelligence that lead to failures. They lack self-awareness; they didn’t know what they were about; they ran an ego trip and thought it was all about them, and they hadn’t made that long journey from I to we. Or they’re unable to face the reality that what they’re doing is not working, to admit their mistakes, and to listen to the other people on the team who are maybe telling them, hey, this is not working.

It’s not at all unusual to take a great innovation, try it in the marketplace, and find out that it didn’t work for the consumer, or the customer, or, in Medtronic’s case, the patient.

And you have to make adjustments. That doesn’t mean it was a bad idea; it just needs to be shaped and molded to the needs of the people you’re serving. They may lack passion for the purpose of the innovation and the grounding of their values in the people on their team, or they may lack compassion for the people they’re serving, or empathy for the people who work with them.

And most significantly, I found that if innovation leaders fail, it’s cuz they lacked the courage to try something that’s really new, a breakthrough idea that’s gonna change the world; they pull back out of fear of failure.

So if you think about those qualities, passion, compassion, empathy, courage, these are all matters of the heart. And I think great innovation leaders have to take the head, if you will the IQ, and integrate that with the heart.

As someone once told me, the longest journey you’ll ever take is the eighteen inches from your head to your heart, and the great innovation leaders that I’ve studied have the capacity to do exactly that.

They take their brains, if you will, their analytical abilities, and integrate them with emotional intelligence and all of these qualities of the heart.

  • So ask yourself, do you have a passion for your work?
  • Do you have compassion for the people you’re serving and the challenges they face?
  • Do you have empathy for the people on your team?
  • And do you have the courage to transform the way the world works?

If you have those qualities, you’re well on your way to becoming an innovation leader.

Professor: Mr. Bill George, Senior Fellow, Harvard Business School, former Chair & CEO of Medtronic

Ethereum – An Alternative to Bitcoin

Ether Historical Market Capitalization Chart (Source: https://etherscan.io/chart/marketcap)

“Ethereum is an open blockchain platform that lets anyone build and use decentralized applications that run on blockchain technology”.

The Ethereum blockchain platform facilitates scripting functionality, or ‘smart contracts’, which are run through the nodes in the network. As a result, unlike the Bitcoin blockchain, it does not just track transactions, it also programs them. Technically, Ethereum is a Turing-complete virtual machine with its native cryptocurrency called ‘ether’. The platform was proposed in 2013 in a white paper by the then 19-year-old Vitalik Buterin.

As of October 2017, Ethereum had a market cap of over $28 billion, making ether the second most valuable cryptocurrency after Bitcoin. 

Ethereum applications do not have a middleman; instead, users interact in a P2P fashion with other users through a variety of interfaces – social, financial, gaming, etc. Since the applications are developed on the decentralized consensus-based network itself, third-party censorship is virtually impossible. Malicious actors cannot secretly tamper with the application by changing the code and compromise all application users (or nodes that are actively interacting with it). These Decentralized Applications have come to be known as Dapps.

Since they are cryptographically secured, Dapps are referred to as ‘secure applications’. Some of the high profile Dapps built on the Ethereum platform include:

  • Augur, which is a Decentralized Prediction Market. Learn more at https://augur.net/
  • Digix, which tokenizes gold on Ethereum. Learn more at: https://digix.global/.
  • Maker, which is a Decentralized Autonomous Organization (DAO). Learn more at: https://makerdao.com/.

The Ethereum network is a distributed global public network, which means it is not run on central servers in a certain geographical location. Instead, the computing power that runs the network is contributed by nodes that are spread across the globe. In other words, Dapps have ‘zero downtime’ – they never go down and, in general, cannot be switched off.


Basics of Blockchains

A distributed ledger is a type of data structure which resides across multiple computer devices, generally spread across locations or regions.

Distributed Ledger Technology includes blockchain technologies and smart contracts. While distributed ledgers existed prior to Bitcoin, the Bitcoin blockchain marks the convergence of a host of technologies, including timestamping of transactions, Peer-to-Peer (P2P) networks, cryptography, and shared computational power, along with a new consensus algorithm.

In summary, distributed ledger technology generally consists of three basic components:

  • A data model that captures the current state of the ledger
  • A language of transactions that changes the ledger state
  • A protocol used to build consensus among participants around which transactions will be accepted, and in what order, by the ledger.

What is Blockchain?

A blockchain is a peer-to-peer distributed ledger, forged by consensus, combined with a system for smart contracts and other assistive technologies.

Together, these can be used to build a new generation of transactional applications that establish trust, accountability, and transparency at their core, while streamlining business processes and legal constraints.

With all distributed ledgers, there’s an initial record, in this case a block, called the genesis block.

Each block will include one or more transactions. Connecting to a blockchain involves people connecting to this distributed ledger via, typically, an application.

So, an example of this would be a wallet.

One person may transfer ownership of a digital asset, like a cryptocurrency, from one person to another, and that asset will move from one person’s wallet to another person’s wallet, and then, that transaction will be shown on a blockchain.

So, this distributed ledger transaction, such as a payment, will move from peer to peer throughout the network, and there are no intermediaries, like a bank, or a payment company, to process this transaction.

Can you give an example of a recent blockchain?

Sure, Blockchain is actually best known because of Bitcoin. And Bitcoin’s blockchain has been in existence since 2009. It’s a cryptocurrency, but, interestingly, people confuse the two terms. Blockchain is actually not Bitcoin, or vice versa.

Blockchain is a distributed ledger. The blockchain then tracks various assets, other than cryptocurrencies, such as Bitcoin.

Those transactions are grouped into blocks, and there can be any number of transactions per block.

Turns out, nodes or machines on a blockchain network group up these transactions and they send them throughout the network. So, you’ve mentioned how blockchains operate on peer-to-peer nodes.

How do they all sync up?

So, the process of blockchains syncing up has to do with a concept of consensus – an agreement among the network peers.

So, eventually, each machine has an exact copy of the blockchain throughout the network.

What is a Smart Contract?

Smart contracts are simply computer programs that execute predefined actions when certain conditions within the system are met.

Consensus refers to a system of ensuring that parties agree to a certain state of the system as the true state.

What is the difference between distributed ledger technology and blockchain technology?

Okay… there actually… I mean, so, the difference between the term distributed ledger and the term blockchain has gotten pretty muddy out there in the broader world.

For me, personally, distributed ledgers are a great, very specific way to talk about this new kind of decentralized database, right,

this system, so that you, and I, and everyone else out there, have a copy of a series of transactions that is kept absolutely in sync.

The specific term for that data structure used to be called blockchain, but now, everybody seems to be applying the term blockchain to anything on the spectrum, from cryptocurrencies to enterprise deployments of DLTs.

So, really, I think blockchain has kind of become like the Internet, right, like a term so broad…

But still, you know, it’s nice to describe this new set of technologies that is inclusive of DLTs and even smart contract functionality.

What does a block consist of?

A block commonly consists of four pieces of metadata:

  • The reference to the previous block
  • The proof of work, also known as a nonce
  • The timestamp
  • The Merkle tree root for the transactions included in this block.
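
The chaining itself can be sketched with nothing more than a hash function. In this simplified illustration (the block contents are hypothetical, and only the previous-hash reference and the transactions from the metadata list above are modeled), each block’s identifier is the hash of its predecessor’s hash concatenated with its own transactions:

```shell
# sha256 helper: hash a string and print the hex digest
h() { printf '%s' "$1" | sha256sum | awk '{print $1}'; }

# The genesis block has no predecessor, so a zero reference is used
genesis=$(h "0:genesis")

# Each later block hashes the previous block's hash together with
# its own transactions, which is what links the chain
block1=$(h "$genesis:tx-a,tx-b")
block2=$(h "$block1:tx-c")
echo "$block2"
```

Because block2’s hash depends on block1’s, and block1’s on the genesis block’s, altering any historical transaction changes every later block identifier, which is what makes tampering evident.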

Data Structure:

According to Andreas M. Antonopoulos, “Merkle trees are used to summarize all the transactions in a block, producing an overall digital fingerprint of the entire set of transactions, providing a very efficient process to verify whether a transaction is included in a block.”

The Merkle tree, also known as a binary hash tree, is a data structure used to store hashes of the individual data in large datasets in a way that makes verification of the dataset efficient. It is an anti-tamper mechanism to ensure that the large dataset has not been changed. The word ‘tree’ refers to a branching data structure in computer science, as seen in the image below.
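
As a toy illustration (not the exact Bitcoin construction, which double-SHA256-hashes raw binary data), the Merkle root of four hypothetical transactions can be computed with standard command-line tools by hashing each leaf and then hashing concatenated pairs up to the root:

```shell
# sha256 helper: hash a string and print the hex digest
h() { printf '%s' "$1" | sha256sum | awk '{print $1}'; }

# Leaf hashes for four hypothetical transactions
l1=$(h "tx1"); l2=$(h "tx2"); l3=$(h "tx3"); l4=$(h "tx4")

# Hash the concatenation of each pair of leaves, then hash the two
# intermediate nodes together to obtain the Merkle root
n1=$(h "$l1$l2"); n2=$(h "$l3$l4")
root=$(h "$n1$n2")
echo "$root"
```

Changing any single transaction changes its leaf hash, which changes every node above it and therefore the root, which is why the root acts as a fingerprint of the whole set.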

Step-by-step guide for LAMP (Linux, Apache, MySQL, PHP) with Laravel

This article is a step-by-step tutorial to get started with PHP and Laravel in a Linux environment (Ubuntu). By installing Apache2, MySQL, and PHP, your LAMP server will be ready to host your PHP application.

At the end of this post, you’ll know how to add your custom domain for your local environment.

Let’s start!

As you’d expect from any Linux tutorial, you should first update and upgrade your system by running:

sudo apt-get update 
sudo apt-get upgrade

Now your system and its packages are up to date.

Next, you need to install some basic dependencies to avoid all kinds of problems in your workflow:

sudo apt-get install -y git curl wget zip unzip

Installing the Apache2 server:

sudo apt-get install apache2

To make sure the server is running, you can execute this command in your terminal:

sudo systemctl status apache2

It is important to know that, by default, all your web content must be under the /var/www/html directory. You can check the Bonus section to learn how to configure any folder as your web root.
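
As a preview, serving a different folder is done with an Apache virtual host. A minimal sketch (the domain name and path below are hypothetical placeholders) would be saved as /etc/apache2/sites-available/myapp.conf:

```apache
# Hypothetical virtual host: serve /home/youruser/projects/myapp/public
# at http://myapp.local instead of the default /var/www/html
<VirtualHost *:80>
    ServerName myapp.local
    DocumentRoot /home/youruser/projects/myapp/public

    <Directory /home/youruser/projects/myapp/public>
        # Allow .htaccess overrides (needed for rewrite rules)
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>
```

After enabling the site with `sudo a2ensite myapp.conf` and reloading Apache, pointing myapp.local at 127.0.0.1 in /etc/hosts makes the domain resolve locally.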

To master Apache2 configuration you need to know these six commands:

  • a2enmod (apache2 enable module): enables an Apache2 module, such as the rewrite module.
  • a2dismod (apache2 disable module): disables an Apache2 module.
  • a2enconf (apache2 enable config): enables a specific configuration.
  • a2disconf (apache2 disable config): disables a specific configuration.
  • a2ensite (apache2 enable site): enables a specific site (virtual host).
  • a2dissite (apache2 disable site): disables a specific site.

Enable the rewrite module

sudo a2enmod rewrite
sudo systemctl restart apache2
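
With the rewrite module enabled, an .htaccess file in a site’s web root can route requests for non-existent files to a single front controller; Laravel ships a similar, more complete file in its public/ directory. A minimal sketch:

```apache
<IfModule mod_rewrite.c>
    RewriteEngine On

    # If the request does not match an existing file,
    # hand it to index.php
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^ index.php [L]
</IfModule>
```

Note that Apache only reads .htaccess files in directories whose configuration sets AllowOverride to permit it.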

Install MySQL

sudo apt-get install mysql-server

Press Enter to validate the first prompt, then create a password for your MySQL root user. It’s highly recommended to secure the MySQL server by running:

sudo mysql_secure_installation

You can read more about improving MySQL installation security

To manage databases, there are a lot of SQL clients to use with MySQL, like MySQL Workbench, SQuirreL SQL, SQLECTRON, or the Chrome extension MySQL Admin.

Install PHP:

sudo add-apt-repository -y ppa:ondrej/php
sudo apt-get update
sudo apt-get install -y php7.1 php7.1-fpm libapache2-mod-php7.1 php7.1-cli php7.1-curl php7.1-mysql php7.1-sqlite3 \
    php7.1-gd php7.1-xml php7.1-mcrypt php7.1-mbstring

As you can see above, this large command installs PHP, the PHP CLI, and the most important PHP extensions.

Install Composer :

curl -sS https://getcomposer.org/installer | sudo php -- --install-dir=/usr/local/bin --filename=composer
sudo chown -R $USER $HOME/.composer

Now you are ready to create your first Laravel app.

Test web Server

To test your LAMP server, just create a Laravel application under the Apache2 root directory:

cd /var/www/html
composer create-project --prefer-dist laravel/laravel lara_app

Open your browser and you can access your app through :