When people talk about containers, they usually mean application containers. Docker is the name most often associated with application containers and is widely used to package applications and services. But there is another type: the system container. Let us look at the differences between application containers and system containers and see how each type is used.
Application containers are used to package applications without launching a virtual machine for each app or each service within an app. They are especially beneficial when moving to a microservices architecture, as they let you create a separate container for each application component and provide greater control, security and process isolation. Ultimately, what you get from application containers is easier distribution: the risks of inconsistency, unreliability and compatibility issues are reduced significantly when an application is shipped inside a container. Docker is currently the most widely adopted container technology focused on application containers, but there are other container technologies, such as CoreOS's Rocket, which promises better security, portability and flexibility of image sharing. Docker already enjoys the advantage of mass adoption, however, and Rocket may simply be too late to the container party; despite these differences, Docker remains the unofficial standard for application containers today. Docker Datacenter builds on this by enabling the deployment of containerized apps across multiple environments, from on-premises infrastructure to virtual private clouds, and lets you provide a Containers as a Service (CaaS) environment for your teams.
As the use of containers increases and organizations deploy them more widely, the need for tools to manage containers across the infrastructure also increases. Orchestrating a cluster of containers is a competitive and rapidly evolving area, and many tools exist offering various feature sets. Container orchestration tools can be broadly defined as providing an enterprise-level framework for integrating and managing containers at scale. Such tools aim to simplify container management and provide a framework not only for defining the initial container deployment but also for managing multiple containers as one entity, for purposes of availability, scaling, and networking.
Additionally, the Cloud Native Computing Foundation (CNCF) is focused on integrating the orchestration layer of the container ecosystem. The CNCF’s stated goal is to create and drive adoption of a new set of common container technologies, and it recently selected Google’s Kubernetes container orchestration tool as its first containerization technology.
System containers play a role similar to virtual machines: they provide isolated user spaces, but they share the kernel of the host operating system and do not rely on a hypervisor. (Any container that runs a full OS user space is a system container.) They also let you install different libraries, languages, and databases, and services running in each container use only the resources assigned to that container. System containers let you run multiple processes at the same time, all under the same host OS rather than a separate guest OS. This lowers the performance impact while keeping the benefits of VMs, such as running multiple processes, and adding the benefits of containers, such as better portability and quick startup times.
Thursday, December 1, 2016
Application Container versus System Container
Friday, May 13, 2016
Web Services best practices
- Use XML Schema to define the input and output of your Web Service operations
- A Web Service should be defined with a WSDL (or WADL in case of REST) and all responses returned by the Web Service should comply with the advertised WSDL
- Do not use a proprietary authentication protocol for your Web Service.
- Rather, use common standards such as HTTP authentication or Kerberos.
- Or define the username/password as part of your XML payload and expose your Web Service via SSL.
- Make sure your Web Service returns error messages that are useful for debugging/tracking problems.
- Make sure to offer a development environment for your service, which preferably runs the same Web Service version as production, but off of a test database rather than production data.
- Important to retain
- Naming conventions
- parameter validation
- parameter order
- No session data
- the resource does not need to be in a known state
- the request alone contains all the information needed
- Always include a version parameter
- Handle multiple formats
- Use heartbeat methods
- a method that does nothing and requires no authentication
- shows the service is alive
- All services should be
- accessible
- documented
- robust
- reliable
- simple
- predictable
- Always implement a reliability error listener.
- Group messages into units of work
- Set the acknowledgement interval to a realistic value for your particular scenario.
- Set timeouts (inactivity and sequence expiration) to realistic values for your particular scenario.
- Configure Web service persistence and buffering (optional) to support asynchronous Web service invocation.
- Choose between three transport types: asynchronous client transport, MakeConnection transport, and synchronous transport.
- Using WS-Policy to Specify Reliable Messaging Policy Assertions
- At Most Once
- At Least Once
- Exactly Once
- In Order
- Define a logical store for each administrative unit (for example, business unit, department, and so on).
- Use the correct logical store for each client or service related to the administrative unit.
- Define separate physical stores and buffering queues for each logical store.
- Using the @Transactional Annotation
- Enabling Web Services Atomic Transactions on Web Services
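Several of the points above (statelessness, an explicit version parameter, useful error messages, and a heartbeat method) can be sketched in plain Java. This is a minimal sketch, not a real framework: the QuoteService class and its ping/getQuote operations are hypothetical names chosen for illustration.

```java
// Illustrative sketch of a stateless, versioned service facade.
// QuoteService, ping() and getQuote() are hypothetical names.
public class QuoteService {
    static final String SUPPORTED_VERSION = "1.0";

    // Heartbeat: does nothing, requires no authentication,
    // and simply shows that the service is alive.
    public boolean ping() {
        return true;
    }

    // Stateless operation: the request alone carries everything it needs,
    // including an explicit version parameter.
    public String getQuote(String version, String symbol) {
        if (!SUPPORTED_VERSION.equals(version)) {
            // Error message useful for debugging: states what was sent
            // and what the service actually supports.
            throw new IllegalArgumentException(
                "Unsupported version '" + version
                + "'; this service supports " + SUPPORTED_VERSION);
        }
        // Validate parameters before doing any work.
        if (symbol == null || symbol.isEmpty()) {
            throw new IllegalArgumentException("symbol must be non-empty");
        }
        return "QUOTE " + symbol;
    }
}
```

Because each call carries its version and all of its inputs, any request can be replayed or load-balanced without relying on server-side session state.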
Thursday, May 5, 2016
Hibernate best practices
- Prefer crawling the object model over running queries
- Querying in Hibernate always causes a flush
- Make everything lazy
- First read will be slow but everything else will be cached
- Use second level cache
- Use cascade cautiously
- Hibernate is not good at saving a whole object tree in one go
- Use field access over method access
- avoids invoking getter/setter logic when Hibernate populates objects
- Use instrumentation
- Don't base equality on auto-generated keys
- you would have to wait until the object is persisted before its equals() method works
- Use id-based equality cautiously
- Write fine-grained classes and map them as components.
- Use an Address class to encapsulate street, suburb, state and postcode. This encourages code reuse and simplifies refactoring.
- Declare identifier properties on persistent classes.
- Hibernate makes identifier properties optional. There are all sorts of reasons why you should use them. We recommend that identifiers be 'synthetic' (generated, with no business meaning) and of a non-primitive type. For maximum flexibility, use java.lang.Long or java.lang.String.
- Place each class mapping in its own file.
- Don't use a single monolithic mapping document. Map com.eg.Foo in the file com/eg/Foo.hbm.xml. This makes particularly good sense in a team environment.
- Load mappings as resources.
- Deploy the mappings along with the classes they map.
- Consider externalising query strings.
- Externalising the query strings to mapping files will make the application more portable.
- Use bind variables.
- Even better, consider using named parameters in queries.
- Don't manage your own JDBC connections.
- Hibernate lets the application manage JDBC connections, but this approach should be considered a last resort.
- If you can't use the built-in connection providers, consider providing your own implementation of net.sf.hibernate.connection.ConnectionProvider.
- Consider using a custom type.
- Suppose you have a Java type, say from some library, that needs to be persisted but doesn't provide the accessors needed to map it as a component.
- You should consider implementing net.sf.hibernate.UserType.
- This approach frees the application code from implementing transformations to / from a Hibernate type.
- Understand Session flushing.
- From time to time the Session synchronizes its persistent state with the database.
- Performance will be affected if this process occurs too often.
- You may sometimes minimize unnecessary flushing by disabling automatic flushing, or even by changing the order of queries and other operations within a particular transaction.
- In a three-tiered architecture, consider using saveOrUpdate().
- When using a servlet / session bean architecture, you could pass persistent objects loaded in the session bean to and from the servlet / JSP layer.
- Use a new session to service each request. Use Session.update() or Session.saveOrUpdate() to update the persistent state of an object.
- In a two tiered architecture, consider using session disconnection.
- Database Transactions have to be as short as possible for best scalability.
- This Application Transaction might span several client requests and response cycles.
- Either use detached objects or, in two-tiered architectures, simply disconnect the Hibernate Session from the JDBC connection and reconnect it for each subsequent request.
- Never use a single Session for more than one application transaction use case; otherwise, you will run into stale data.
- Don't treat exceptions as recoverable.
- This is more of a necessary practice than a "best" practice.
- When an exception occurs, roll back the Transaction and close the Session.
- If you don't, Hibernate can't guarantee that in-memory state accurately represents persistent state.
- As a special case of this, do not use Session.load() to determine if an instance with the given identifier exists on the database;
- use find() instead.
- Prefer lazy fetching for associations.
- Use eager (outer-join) fetching sparingly.
- Use proxies and/or lazy collections for most associations to classes that are not cached at the JVM-level.
- For associations to cached classes, where there is a high probability of a cache hit, explicitly disable eager fetching using outer-join="false".
- When an outer-join fetch is appropriate to a particular use case, use a query with a left join fetch.
- Consider abstracting your business logic from Hibernate.
- Hide (Hibernate) data-access code behind an interface.
- Combine the DAO and Thread Local Session patterns.
- You can even have some classes persisted by handcoded JDBC, associated to Hibernate via a UserType.
- Implement equals() and hashCode() using a unique business key.
- If you compare objects outside of the Session scope, you have to implement equals() and hashCode().
- If you implement these methods, never ever use the database identifier!
- To implement equals() and hashCode(), use a unique business key, that is, compare a unique combination of class properties.
- Never use collections in the equals() comparison (lazy loading) and be careful with other associated classes that might be proxied.
- Don't use exotic association mappings.
- Good use cases for real many-to-many associations are rare.
- Most of the time you need additional information stored in the "link table".
- In this case, it is much better to use two one-to-many associations to an intermediate link class.
- In fact, we think that most associations are one-to-many or many-to-one; be careful when using any other association style and ask yourself if it is really necessary.
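The advice above on implementing equals() and hashCode() with a unique business key, rather than the database identifier, can be sketched as follows. This is a minimal illustration under assumptions: the User class and its username business key are invented for the example.

```java
// Sketch of equals()/hashCode() based on a unique business key,
// not on the database-generated identifier. User and username
// are hypothetical names used for illustration.
public class User {
    private Long id;               // synthetic identifier: never used for equality
    private final String username; // unique, immutable business key

    public User(String username) {
        this.username = username;
    }

    public String getUsername() {
        return username;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof User)) return false;
        // Compare only the business key; avoid the generated id
        // (unset until the object is persisted) and avoid lazily
        // loaded collections.
        return username.equals(((User) o).getUsername());
    }

    @Override
    public int hashCode() {
        return username.hashCode();
    }
}
```

With this scheme, two instances representing the same user compare equal before either has been saved, so they behave correctly in Sets and Maps outside the Session scope.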
Friday, April 29, 2016
Friday, April 15, 2016
Different Ways of Injecting Dependency in an AngularJS Application
In AngularJS, dependencies can be passed in three possible ways. They are as follows:
- Passing a dependency as Function Arguments
- Passing dependencies as function arguments works perfectly fine until we deploy the application to production with a minified version of the application.
- Usually, to improve performance, we minify the application in production, but passing a dependency as a function argument breaks when we minify the application.
- This is because the parameter name is changed to a shorter alias.
app.controller("ProductController", function ($scope) {
    $scope.message = "Hey I am passed as function argument";
});
- Passing a dependency as Array Arguments
- The most popular way of passing dependencies in an AngularJS application is as array arguments.
- When we pass a dependency as an Array Argument, the application does not break in production when we minify the application.
- We can do this in two possible ways.
- Using the Named function
var app = angular.module('app', []);
function ProductController($scope) {
    $scope.greet = "Infragistics";
}
app.controller('ProductController', ['$scope', ProductController]);
- We are passing the $scope object as a dependency in the array, along with the name of the controller function.
- More than one dependency can be passed, separated by a comma.
- For example, we can pass both $http service and the $scope object as dependencies
var app = angular.module('app', []);
function ProductController($scope, $http) {
    $scope.greet = $http.get("api.com");
}
app.controller('ProductController', ['$scope', '$http', ProductController]);
- Using the Inline Anonymous function
- You can pass dependencies as array arguments exactly the same way you pass them in named controller functions.
- We can pass dependencies in an inline function as array arguments
- Keep in mind that dependencies injected as Array arguments work even if we minify the application.
var app = angular.module('app', []);
app.controller('ProductController', ['$scope', '$http', function ($scope, $http) {
    $scope.greet = "Foo is Great!";
}]);
- Passing a dependency using the $inject service
- We can also manually inject dependencies with the $inject service. For example, we can inject the $scope object as a dependency using $inject:
function ProductController($scope) {
    $scope.greet = "Foo is Not Great!";
}
ProductController.$inject = ['$scope'];
app.controller('ProductController', ProductController);
Using the $inject service also does not break the application when we minify it for production. Most often, you will see $inject used to supply dependencies when unit testing controllers.
- Create a Calculator service
app.factory("Calculator", function () {
    return {
        add: function (a, b) {
            return a + b;
        }
    };
});
- Use a Calculator service inside CalController
app.controller('CalController', CalController);
function CalController($scope, Calculator) {
    $scope.result = 0;
    $scope.add = function () {
        $scope.result = Calculator.add($scope.num1, $scope.num2);
    };
}
- At this point, the application should work because dependencies are passed as function arguments.
- However, the application will break when we minify it.
- So, let's go ahead and inject the dependencies using the $inject service:
CalController.$inject = ['$scope', 'Calculator'];
- On the view, the controller can be used with markup along these lines (a minimal sketch; the inputs and button are illustrative):
<div ng-controller="CalController">
    <input type="number" ng-model="num1" />
    <input type="number" ng-model="num2" />
    <button ng-click="add()">Add</button>
    {{result}}
</div>