Apr 30, 2015

Sonus WebRTC Solution for Cloud @SonusNet | @ThingsExpo [#IoT #WebRTC]

Sonus Networks introduced the Sonus WebRTC Services Solution, a virtualized Web Real-Time Communications (WebRTC) offering purpose-built for the cloud. The WebRTC Services Solution provides signaling between WebRTC applications and interworking between WebRTC and the Session Initiation Protocol (SIP), delivering advanced real-time communications capabilities in mobile applications and on websites accessible via a browser.

read more


via Latest News from JAVA Developer's Journal

Melbourne IT completes Uber Global acquisition

Australian hosting company Melbourne IT has completed the acquisition of cloud services provider and domain registration provider Uber Global.
via http://www.telecompaper.com/news/melbourne-it-completes-uber-global-acquisition--1079989

Take your logs data to new places with streaming export to Cloud Pub/Sub

By GCP Team


Earlier this year, we announced the beta of the Google Cloud Logging service, which included the capability to:

  • Stream logs in real time to Google BigQuery, so you can analyze log data and get immediate insights.
  • Export logs to Google Cloud Storage (including Nearline), so you can archive log data for longer periods to meet backup and compliance requirements.


Today we’re expanding Cloud Logging capabilities with the beta of the Cloud Logging Connector, which allows you to stream logs to Google Cloud Pub/Sub. With this capability you can stream log data to your own endpoints and further expand how you make big data useful. For example, you can now transform and enrich the data in Cloud Dataflow before sending it to BigQuery for analysis. Furthermore, this provides easy real-time access to all your log data, so you can export it to your private cloud or any third-party application.

Cloud Pub/Sub
Google Cloud Pub/Sub delivers real-time and reliable messaging in one global, managed service that helps you create simpler, more reliable, and more flexible applications. By providing many-to-many, asynchronous messaging that decouples senders and receivers, it allows for secure and highly available communication between independently written applications. With Cloud Pub/Sub, you can push your log events to another Webhook, or pull them as they happen. For more information, check out our Google Cloud Pub/Sub documentation.
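To make the pull model concrete, here is a minimal sketch of a Java subscriber that receives exported log entries as they arrive. It assumes the google-cloud-pubsub Java client (a newer library than existed when this post was written); the project and subscription names are hypothetical:

    import com.google.cloud.pubsub.v1.AckReplyConsumer;
    import com.google.cloud.pubsub.v1.MessageReceiver;
    import com.google.cloud.pubsub.v1.Subscriber;
    import com.google.pubsub.v1.ProjectSubscriptionName;
    import com.google.pubsub.v1.PubsubMessage;

    public class LogSubscriber {
        public static void main(String[] args) {
            // Hypothetical names; the subscription is attached to the topic
            // configured as the logs export target.
            ProjectSubscriptionName subscription =
                ProjectSubscriptionName.of("my-project", "exported-logs-sub");

            // This callback fires once per exported log entry.
            MessageReceiver receiver = (PubsubMessage message, AckReplyConsumer consumer) -> {
                System.out.println("log entry: " + message.getData().toStringUtf8());
                consumer.ack(); // acknowledge so Pub/Sub does not redeliver
            };

            Subscriber subscriber = Subscriber.newBuilder(subscription, receiver).build();
            subscriber.startAsync().awaitRunning();
            subscriber.awaitTerminated(); // block here and keep receiving
        }
    }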
(Diagram: high-level Pub/Sub schema)
Configuring Export to Cloud Pub/Sub

Configuring export of logs to Cloud Pub/Sub is easy and can be done from the Logs Viewer user interface. To get to the export configuration UI, start in the Developers Console, go to Logs under Monitoring, and then click Exports in the top menu. Currently, export configuration is supported for Google App Engine and Google Compute Engine logs.


(Screenshot: one-click export configuration in the Developers Console)


Transforming Log Data in Dataflow

Google Cloud Dataflow allows you to build, deploy, and run data processing pipelines at any scale. It enables reliable execution for large-scale data processing scenarios such as ETL and analytics, and allows pipelines to execute in either streaming or batch mode. You choose.

You can use the Cloud Pub/Sub export mechanism to stream your log data to Cloud Dataflow and dynamically generate fields, combine different log tables for correlation, and parse and enrich the data for custom needs. Here are a few examples of what you can achieve with log data in Cloud Dataflow:


  • Sometimes it is useful to see data only for key applications or top customers. In Cloud Dataflow, you can group logs by Customer ID or Application ID, filter out specific logs, and then aggregate system-level or application-level metrics.
  • On the flip side, sometimes you want to enrich the log data to make it easier to analyze, for example by appending marketing campaign information to customer interaction logs, or other user profile info. Cloud Dataflow lets you do this on the fly.
  • In addition to preparing the data for further analysis, Cloud Dataflow also lets you perform analysis in real time. So you can look for anomalies, detect security intrusions, generate alerts, keep a real-time dashboard updated, etc.


Cloud Dataflow can stream the processed data to BigQuery, so you can analyze your enriched data. For more details, please see the Google Cloud Dataflow documentation.
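To make that flow concrete, here is a minimal pipeline sketch in Java using the Apache Beam SDK, the open source successor to the Dataflow SDK this post describes. The subscription, table, filter condition, and enrichment field are all hypothetical placeholders:

    import com.google.api.services.bigquery.model.TableRow;
    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
    import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Filter;
    import org.apache.beam.sdk.transforms.MapElements;
    import org.apache.beam.sdk.transforms.SimpleFunction;

    public class LogEnrichmentPipeline {
        public static void main(String[] args) {
            Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

            p.apply(PubsubIO.readStrings()
                    .fromSubscription("projects/my-project/subscriptions/exported-logs-sub"))
             // Keep only the entries we care about, e.g. one application's logs.
             .apply(Filter.by(line -> line.contains("\"app_id\":\"my-app\"")))
             // Reshape and enrich each entry into a BigQuery row.
             .apply(MapElements.via(new SimpleFunction<String, TableRow>() {
                 @Override
                 public TableRow apply(String line) {
                     return new TableRow()
                         .set("raw_entry", line)
                         .set("campaign", "spring-launch"); // example enrichment
                 }
             }))
             .apply(BigQueryIO.writeTableRows().to("my-project:logs.enriched"));

            p.run(); // in practice, run with the Dataflow runner and streaming enabled
        }
    }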


Getting Started

If you’re a current Google Cloud Platform user, the capability to stream logs to Cloud Pub/Sub is available to you at no additional charge. Applicable charges for using Cloud Pub/Sub and Cloud Dataflow will apply. For more information, visit the Cloud Logging documentation page and share your feedback.

-Posted by Deepak Tiwari, Product Manager



by i88.ca via Social Marketing by I88.CA » i88.ca

DigitalGov’s Inaugural Podcast: How IVR Supports Contact Centers

DevOps: Building Trust By @AppDynamics | @DevOpsSummit [#DevOps]

There are lots of ways DevOps can fail. For all the revolutionary promise of the idea, it takes a tricky cultural shift to get Development and Operations working together. Many companies—especially big ones—take a top-down approach. C-suite execs trumpet a Brand New DevOps Initiative, which everyone else resists, undermines, or ignores. As a developer at a SaaS company, my success depends on Ops. Too often, Dev and Ops are divided by mistrust and rarely talk between releases. Bridging this gap requires finding a way to dial down the tensions that spring from differences in status and divergent incentives.

read more


via Latest News from JAVA Developer's Journal

Docker Load Balancing in @Rancher_Labs 0.16 By @LemonJet | @CloudExpo [#DevOps]

One of the most frequently requested Rancher features, load balancers are used to distribute traffic between Docker containers. Now Rancher users can configure, update and scale up an integrated load balancing service to meet their application needs, using either Rancher's UI or API. To implement our load balancing functionality we decided to use HAProxy, which is deployed as a container and managed by the Rancher orchestration functionality. With Rancher's load balancing capability, users are now able to use a consistent, portable load balancing service on any infrastructure where they can run Docker. Whether it is running in a public cloud, private cloud, lab, cluster, or even on a laptop, any container can be a target for the load balancer.

read more


via Latest News from JAVA Developer's Journal

How Much Will My Agency’s Contact Center Cost?

(Photo: chairs, computers, and headsets in a modern office. Credit: Wavebreakmedia Ltd/Wavebreak Media/Thinkstock)

The federal government has caught the customer experience bug. We want our customers to complete their tasks with minimal effort using a streamlined process. If they need personal help, we want it to be quick, polite, and provide the best answer. But that personal help frequently requires a team of highly skilled, dedicated people—a contact center.

When people call to ask how much it will or should cost their agency to have a contact center, I can’t give them an answer. I want to, but until I learn what they need, and the customer experience they want to provide, I really don’t know enough to give an accurate estimate.

It’s similar to buying a car. There’s not one answer to how much it costs to buy a car; it depends on what you’re looking for. Do you want a car or an SUV? Automatic or manual? High-performance or economy? Leather or fabric interior? Just as all these elements weigh into the cost of a car, the cost of a contact center is dependent on many factors.

As the manager of the USA.gov Contact Center and the Co-Chair of the Government Contact Center Council (G3C), I get this question about once a month. We’ve developed a set of questions that will help an agency determine the type of contact center support they need.

If you’re starting a contact center, or thinking of changing your model, first, determine the answers to these questions. (Note: this list is available for download [22.5 KB, MS Word, .docx])

  1. Do you want contractor support or will you be using government full time employees? If contracting out, what level?
    • Customer Service Representative (CSR) (Agent)
    • Management
    • Technology
  2. Inquiries
    • What are the anticipated, most common types of inquiries you expect to receive?
    • Who are your customers/why are they contacting you? Customer demographics impact channels, languages, and hours of operation.
  3. Services to Be Provided
    • Channels: What channels would you like to answer?
      • Phone
      • Web Chat
      • Email
      • Postal mail
      • Social media (Facebook, Twitter)
      • SMS
      • Customer Support App
    • Tiers: What level of agent support is required?
      • Tier 1: Basic, using publicly available information, FAQs
      • Tier 2: Moderately complex
      • Tier 3: Complex, requires subject matter expertise
      • If only Tier 1, do you want Tier 2 or 3 transferred?
        • Warm transfer (the CSR stays on the line and introduces the call to Tier 2)
        • Cold transfer (the CSR simply transfers the call to a designated agency number and hangs up)
      • How complex is the Tier 2 matrix—how many different Tier 2 contacts are there?
      • What are the hours of availability of Tiers 2 and 3? If they don’t match Tier 1 support, what is the solution for the other hours?
    • Hours of Operation: What are the expected hours of operation?
    • Languages: What languages need to be supported?
      • Note: A commercial, professional language line can provide affordable access to many languages.
  4. Call Volume
    • What is the expected call volume? (daily, weekly, monthly)
    • What is the average handle time (the estimated average length of call)?
    • What is the estimated after-call wrap-up time?
  5. Security
    • Do CSRs need any special security clearance to answer inquiries?
  6. Telecommunication and IVR
    • Does your agency want to use an existing toll free number?
    • Do you want a new toll free number just for this topic?
    • What company provides telecommunications for your agency?
    • Will you use recorded messages (IVRs) for FAQs?
  7. Operations
    • Does your agency have current contact center procedures documented?
    • Does your agency have specific policies documented?
  8. Training
    • What type of CSR training is required?
    • What is the estimated length, in days, of new-hire training?
    • Describe the training process (lectures, role plays, application, information retrieval, all of the above?)
    • Do you have any training materials developed? If no training modules/curriculum are available, who will be responsible for developing and delivering the training?
  9. Data Collection
    • What information needs to be captured about each inquiry?
      • Identify required data fields
      • Purpose of inquiry
      • Demographic data
      • Customer data
      • Other
    • Identify optional data fields
      • Business name
      • Email address
      • Other contact information
    • How will the data be used?
  10. Reports
    • What types of reports are needed?
      • Standard contact center reports
      • Other reports—identify data fields that require reporting
  11. Performance Metrics
    • What are the required performance standards/Key Performance Indicators? (Response Time, Service Level, Quality Assurance Scores, Customer Satisfaction Scores, First Contact Resolution, Service Availability, etc.).
      • Note: High performance standards increase the cost of the service.
  12. Resources/Content
    • What resources/content will be provided to CSRs to answer inquiries?
    • Are standard resources (FAQs) already developed?
    • Maintenance of resources
      • Who will maintain?
      • Frequency of updates required?
  13. Technology
    • What technology will be used to support the contact center?
    • Does the agency own the technology?
    • Who will maintain the technology?
  14. Budget
    • What budget does your agency have for customer support activities?
    • Do you anticipate that this budget will remain steady in future fiscal years?
  15. Timeline for Start Up
    • When do you anticipate contact center support to begin?
    • What is the estimated time frame/duration for contact center support?

Good luck to you as you assess your support needs. If you’re a federal employee who’d like to network with other contact center managers, please join the G3C Community of Practice. We discuss the best practices, research, and trends that improve government contact centers.


by Tonya Beres via DigitalGov

Cloudy with a Chance of Security By @LMacVittie | @CloudExpo [#Cloud]

We found all manner of interesting practices and trends as it relates to cloud and security in our State of Application Delivery 2015 report. One of the more fascinating data points was a relationship between security posture and cloud adoption. That is, it appears that the more applications an organization migrates to the cloud, the less strict its security posture becomes. Really. I was a bit disturbed by that, too. At least at first.

read more


via Latest News from JAVA Developer's Journal

Containers & Cloud & Security – Oh My! By @EFeatherston | @CloudExpo [#Cloud]

Dorothy the CIO was walking the yellow brick road of planning. She was on her way to the Emerald City to ask the great wizard of the agile data center for advice. Along the way she met two other CIOs who joined her on the journey, nicknamed Tin Man and Scarecrow. Their travels brought them to the edge of the dark forest of datacenter hype and fear. “Do you think we’ll meet any wild technologies and fears in there?” she asked her companions. “We might,” responded Tin Man. “Ones that devour IT projects?” whispered Scarecrow. “Possibly,” said the Tin Man, “but mostly containers, and clouds, and security.” “Containers, and clouds, and security, oh my!” they all murmured in unison as they entered the dark forest.

read more


via Latest News from JAVA Developer's Journal

Embracing Software Defined By @StorPool | @CloudExpo [#Cloud]

As Marc Andreessen says, software is eating the world. Everything is rapidly moving toward being software-defined – from our phones and cars through our washing machines to the datacenter. However, there are bigger challenges when implementing software-defined at larger scale – when building software-defined infrastructure. In his session at 16th Cloud Expo, Boyan Ivanov, CEO of StorPool, will provide some practical insights on the what, how and why of implementing "software-defined" in the datacenter.

read more


via Latest News from JAVA Developer's Journal

Cloud Expo Launches "Microservices Journal" | @CloudExpo [#Microservices]

The world's leading Cloud event, Cloud Expo has launched Microservices Journal on the SYS-CON.com portal, featuring over 19,000 original articles, news stories, features, and blog entries. Microservices Journal is focused on this critical enterprise IT topic in the world of cloud computing, and offers top articles, news stories, and blog posts from the world's well-known experts, guaranteeing better exposure for its authors than any other publication. Follow new article posts on Twitter at @MicroservicesE

read more


via Latest News from JAVA Developer's Journal

SOA or Microservices? By @Ruxit | @DevOpsSummit [#DevOps #Microservices]

This is a no-hype, pragmatic post about why I think you should consider architecting your next project the way SOA and/or microservices suggest, whether it’s a greenfield approach or you’re in dire need of refactoring. Please note: considering still keeps open the option of not taking that approach. After reading this, you will have a better idea about whether building multiple small components instead of a single, large component makes sense for your project. This post assumes that you have experience with software architecture and services (you’ll find some words about my experience at the bottom of this post). I won’t go into the details of Wikipedia’s or Martin Fowler’s definitions per se. Rather, I will talk about what microservices and/or SOA could and should do for your project.

read more


via Latest News from JAVA Developer's Journal

Mobile-Friendly Park Websites on NPS.gov

(Image: mobile-friendly NPS.gov website on a phone)

Park websites on NPS.gov from A (Acadia) to Z (Zion) are now mobile-friendly. Visitors using phones and tablets to visit national park websites now have a user-friendly experience to enhance their virtual visits. Previously, visitors using mobile devices saw a smaller version of the website scaled to fit the size of their screen. Now, the content will adjust to fit small screens while providing the same functionality available to those visiting the site using a desktop or laptop.

Other types of sites that use the NPS.gov content management system will become mobile-friendly in the coming weeks. This includes both subject and organization sites, such as the Centennial subject site and the Office of Communications organization site, and national-level pages, such as Find a Park.

A team of National Park Service (NPS) designers, developers, and testers made this project possible.

Project Overview

Improving the quality of the overall user experience of NPS.gov is a major priority of NPS leadership as the bureau approaches its centennial in 2016. The project to make NPS.gov mobile-friendly helps address the National Park Service’s Call to Action Item 17: Go Digital, which outlines the need to “create a user-friendly Web platform that supports online and mobile technology including social media.” The project will allow NPS.gov to function well on the wide variety of devices, browsers, and network speeds used by visitors to access NPS Web content. Our approach to accomplish that goal is known as responsive Web design (RWD). Instead of having separate websites for each type of device, RWD features a more fluid design that adapts to a wide range of devices.

Benefits

Mobile-friendly sites on NPS.gov:

  • Provide a better user experience on web-enabled devices of all shapes and sizes. Right now, almost 40% of NPS.gov visitors use tablets or mobile devices to access the site. The responsive design will better serve those visitors. (Image: search engine results indicating that an NPS park website is mobile-friendly)
  • Ensure that audiences whose only Internet access is through mobile devices have good usability on NPS.gov.
  • Appear higher in Web searches on those search engines that factor mobile friendliness when displaying results.
  • Use the same Web design code as all sites using the NPS content management system, rather than having different code for desktops/laptops and for mobile devices.
  • Improve speed and performance for visitors.
  • Are ready for whatever devices come in the future!

Timeline

  • Park websites: As of April 20, 2015, all NPS park websites are mobile-friendly. Internal accessibility and usability testing on the mobile-friendly park websites has already been performed. External testing using a pool of 60 NPS volunteers is now being conducted.
  • NPS.gov homepage, subject, and organization sites: The design and testing of the styles and code that will make the NPS.gov homepage, subject sites, and organization sites mobile-friendly is complete. Those pages and sites will be updated between now and the end of May 2015.
  • National pages: Other national-level pages on NPS.gov managed by the Web team (for example, Find a Park and About Us) will be made mobile-friendly by the end of May 2015.

The new centennial design that will launch in January 2016 will also be mobile-friendly.

About NPS.gov

NPS.gov is the public Web presence of the parks and programs of the National Park Service as well as the primary destination for virtual visitors looking to plan trips to parks or learn more about our nation’s natural and cultural heritage. NPS.gov includes websites for the more than 400 places the NPS cares for and the programs that make that possible. Management is distributed among NPS employees in parks, regions, and national offices across the United States. A quick look at 2014’s NPS.gov figures shows:

  • 100,000+ Web pages
  • 75 million website visitors
  • 125 million sessions
  • 490 million pageviews

Todd Edgar is the NPS.gov Web manager in the National Park Service Office of Communications.


by Ashley Wichman via DigitalGov

Announcing @ColumnIT to Exhibit at @DevOpsSummit New York [#DevOps]

SYS-CON Events announced today that Column Technologies, a global technology solutions company, will exhibit at SYS-CON's DevOps Summit 2015 New York, which will take place June 9-11, 2015, at the Javits Center in New York City, NY. Established in 1998, Column Technologies is a leader in application performance and infrastructure management for commercial and federal markets. The company is headquartered in the United States, with a diverse and talented team of more than 350 employees around the world, and offices in Canada, India and the United Kingdom.

read more


via Latest News from JAVA Developer's Journal

Announcing @CodeFutures Named “Sponsor” of @CloudExpo New York [#Cloud]

SYS-CON Events announced today that CodeFutures, a leading supplier of database performance tools, has been named a “Sponsor” of SYS-CON's 16th International Cloud Expo®, which will take place on June 9–11, 2015, at the Javits Center in New York, NY. CodeFutures is an independent software vendor focused on providing database performance tools that increase productivity during database development and improve database performance and scalability in production.

read more


via Latest News from JAVA Developer's Journal

Continuous Delivery with Docker Containers and Java EE

By Thomas Qvarnström. On Tuesday the 28th, Markus Eisele and I hosted a webinar about Continuous Delivery with Docker Containers and Java EE. Markus and I co-authored a blog entry about it, published on his blog here (http://blog.eisele.net/2015/04/continuous-delivery-with-docker.html).


by i88.ca via Social Marketing by I88.CA » i88.ca

'Easy Entry' Pricing from @OpenMake Software | @DevOpsSummit [#DevOps]

OpenMake Software announced a new, 'easy entry' pricing plan for its Dynamic DevOps Suite, which includes OpenMake Meister and OpenMake Release Engineer. The new pricing strategy is based on an annual subscription rate that provides continuous integration, build automation, configuration management and Application Release Automation (ARA) for as low as $1,250 per year, per software application, including maintenance. This makes a complete and scalable suite available to the market with minimal acquisition costs, for a proven solution used globally by hundreds of companies.

read more


via Latest News from JAVA Developer's Journal

Smarter DevOps By @Automic | @DevOpsSummit [#DevOps]

Those from a dev-centric background might discount monitoring from their DevOps approach. But if you're already automating deployment and testing, why wouldn't you use the right set of tools to avoid manual performance monitoring? Of course, you do performance tests each time you release changes to pre-production. You're checking for response time, processor utilization, memory occupation, I/O activity and so on, but how can you ensure normal behavior without a reference point? In other words - despite your good intentions - aren't you using critical production as a first level of benchmark? If you run pre-production performance tests regularly and you can compare those results to what is seen in production, there is a greater chance of detecting the processor utilization jump that caused slowdowns and then relate it to a code change. But the reality is that most developers lack the ability to monitor how their code performs in the test, staging and production environments.

read more


via Latest News from JAVA Developer's Journal

Peter Dunkley Named @WebRTCSummit Chair @ThingsExpo [#WebRTC #IoT]

WebRTC Summit has announced today that Peter Dunkley has been named summit chair of WebRTC Summit 2015 New York. The 4th International WebRTC Summit will take place on June 9-11, 2015, at the Javits Center in Manhattan, New York. @ThingsExpo anticipates 90% of WebRTC companies & developers will monetize their products & services through IoT by 2016. Peter Dunkley is Technical Director at Acision. He graduated from The University of Edinburgh in 2000 with a BSc (Hons) in Computer Science. After graduation Peter worked on a PSTN switch, developing signalling stacks for SS7, ISDN and similar protocols and creating advanced routing and service applications. Since 2005 he has worked mainly with SIP, first leading a team developing a PSTN gateway and then managing the development of a SIP application server. Peter joined Crocodile RCS in September 2010 and has since made numerous contributions to the Kamailio open source SIP router project (particularly in the areas of presence, WebSocket, MSRP, and SIP Outbound). Peter is one of the authors of the MSRP over WebSocket draft (draft-pd-dispatch-msrp-websocket) and is a contributor to several open-source projects.

read more


via Latest News from JAVA Developer's Journal

Continuous Delivery with Docker Containers and Java EE

By unknown

Organizations need a way to make application delivery fast, predictable and secure, and the agility provided by containers, such as Docker, helps developers realize this goal. For Java EE applications, this enables packaging of applications, the application server, and other dependencies in a container that can be replicated in build, test, and production environments. This takes you one step closer to achieving continuous delivery. At least, that was the abstract for the webinar Thomas and I gave a couple of days ago. This is the supporting blog post with a little more detail about the setup, including all the links to the source code and the demo. Find a more detailed technical walkthrough in the developer interview also embedded below. First we’re going to talk a bit about why everybody is keen on optimizing application delivery these days. Increasingly complicated applications are putting even more pressure on infrastructures, teams and processes. Containers promise to bring a solution by keeping applications and their runtime components together.
But let’s not stop there; let’s look beyond what seems to be a perfect topic for operations. It leaks more and more into the developer space. As a developer, it is easy to ignore the latest hype by just concentrating on what we can do best: delivering functioning applications. But honestly, there is more to it. Java EE, especially, requires more than just code. So, containers promise to make our lives easier.
Just talking about containers isn’t the whole story. They have to be usable and out there in production for developers to finally use them. This is where we briefly sneak a look at what is coming with OpenShift v3 and how this fits into the bigger picture.
After this brief introduction, Thomas is going to walk you through the details, starting with Docker containers and how they allow for a complete continuous delivery chain that fully supports DevOps.

But why do we need containers? And why now?
Most importantly, new architecture approaches like microservices drive us away from large VMs and physical servers running monolithic applications. Individually bootstrapped services are a natural fit for container-based deployment, because everything needed to run them is completely encapsulated. Plus, the urge for optimized operations is driving more and more infrastructures into the cloud model. We will see containers-as-a-service offerings, which will be faster to deploy, cheaper to run, and easier to manage than VMs. Enterprises will run PaaS products that focus on enterprise-class operations using containers as a target. Distributing software in containerized packages instead of virtual machines is far more complete and more standardized, and easier to adapt to different suppliers and vendors, no matter what language or runtime the product is built for. Enterprises no longer have to focus on a single platform to achieve optimized operations and costs. The container infrastructure allows a more heterogeneous technology base while holding up standardized operational models and having the potential for future optimizations and add-ons, for example around security. Containers and their management systems are the glue between developers and operators and are a technological layer that supports the DevOps movement. To make it short: containers are ready.

What do I as a Java EE developer gain from all of that?
Containers are about what’s inside of them, not outside of them. It’s easy to compare this with PaaS offerings. Developers don’t want to care about configuration or hosting; they just want a reliable runtime for their applications. Beyond containers, there’s not a lot you need. Standard formats, standard images, and even the option to use a company-wide hub for them will make development teams a lot more efficient. And this also relates to how we set up local environments and roll them out to our teams. Differently configured instances can be spun up and torn down in seconds. There is no need to maintain different versions of middleware or databases, or to mess around with paths or configurations. Preconfigured containers will reduce team setup times significantly and allow for testing with different configurations more easily. Images can be centrally developed, configured, and maintained, according to corporate standards and including specific frameworks or integration libraries. Responsibility and education are the key parts in terms of motivation: today’s full-stack developers want to be responsible for their work of art – end to end. Programming stopped being a tedious job using the same lame APIs day in and day out. As a matter of fact, containers allow for a complete round trip from building to packaging and shipping your applications through the different environments into production. And because everything can be versioned and centrally maintained, and relies on the same operating system and configuration in every environment, the complete software delivery chain is a lot more predictable with containers.

How does OpenShift fit into all of that?
OpenShift is the perfect example of how the market is shifting toward containers. It comes in different editions:

  • OpenShift Origin is the open source project behind Red Hat’s cloud offering
  • OpenShift Online is Red Hat’s public cloud application development and hosting platform that automates the provisioning, management and scaling of applications so that you can focus on writing the code for your business, startup, or next big idea. Try it out yourself by signing up at openshift.com
  • OpenShift Enterprise is an on-premise, private Platform as a Service (PaaS) offering that allows you to deliver apps faster and meet your enterprise’s growing application demands.

You’re free to pick the solution that best fits your needs, from building your own PaaS with Origin to running a fully supported on-premise PaaS yourself.
And we’re going big with the next version of OpenShift! With each milestone of Origin comes a new version of OpenShift, and the Origin source code repository for OpenShift 3 is now available. It is progressing towards a whole new architecture, entirely re-engineered from the ground up. This new architecture integrates Docker and the Kubernetes container orchestration and management system, available on an Atomic host optimized for running containerized applications. On top of all that, OpenShift will incorporate effective and efficient DevOps workflows that play a critical role in platform-as-a-service to accelerate application delivery.

What will OpenShift v3 Look Like?
OpenShift adds developer and operational centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams and applications.
Starting at the bottom of everything, Red Hat has been working with the Docker community to evolve our existing container technology and drive a new standard for containerization through the libcontainer project. This work led to the announcement of Docker support in RHEL 7 and the launch of Project Atomic to develop a new container-optimized Linux host. This new container architecture is at the core of OpenShift v3.
The OpenShift v3 Cartridge format will adopt the Docker packaging model and enable users to leverage any application component packaged as a Docker image. This will enable developers to tap into the Docker Hub community to both access and share container images to use in OpenShift.
In OpenShift v3, we will be integrating Kubernetes in the OpenShift Broker to drive container orchestration.
OpenShift v3 will bring new capabilities for provisioning, patching and managing application containers, routing and networking enhancements, and provisioning and managing the OpenShift platform itself. The goal is to deliver a best of breed user experience for OpenShift developers.
Be excited for the upcoming release!

The Complete Demo
And now it is time to grab a #coffee+++ and sit back to enjoy the demo in 30 minutes instead of just 10. Thomas is going to cover all the details, and I was nice enough to ask some nasty questions in between.

And here is an architectural overview as a Prezi presentation, which Thomas showed in the webcast.

Links and Further Readings
Some food for thought and homework: the link collection from the webinar, plus some more resources for you to dig through.



by i88.ca via Social Marketing by I88.CA » i88.ca

What is Trolling?

If you engage in discussion on the Internet long enough, you're bound to encounter it: someone calling someone else a troll.

The common interpretation of Troll is the Grimms' Fairy Tales, Lord of the Rings, "hangs out under a bridge" type of troll.

Thus, a troll is someone who exists to hurt people, cause harm, and break a bunch of stuff because that's something brutish trolls just … do, isn't it?

In that sense, calling someone a Troll is not so different from the pre-Internet tactic of calling someone a monster – implying that they lack all the self-control and self-awareness a normal human being would have.

Pretty harsh.

That might be what the term is evolving to mean, but it's not the original intent.

The original definition of troll was not a beast, but a fisherman:

Troll

verb \ˈtrōl\

  1. to fish with a hook and line that you pull through the water

  2. to search for or try to get (something)

  3. to search through (something)

If you're curious why the fishing metaphor is so apt, check out this interview:

There's so much fishing going on here someone should have probably applied for a permit first.

  • He engages in the interview just enough to get the other person to argue with him. From there, he fishes for anything that can nudge the argument into some kind of car wreck that everyone can gawk at, generating lots of views and publicity.

  • He isn't interested in learning anything about the movie, or getting any insight, however fleeting, into this celebrity and how they approached acting or directing. Those are perfunctory concerns, quickly discarded on the way to his true goal: generating controversy, the more the better.

This guy is a classic troll.

  1. He came to generate argument.
  2. He doesn't truly care about the topic.

Some trolls can seem to care about a topic, because they hold extreme views on it, and will hold forth at great length on said topic, in excruciating detail, to anyone who will listen. For days. Weeks. Months. But this is an illusion.

The most striking characteristic of the worst trolls is that their position on a given topic is absolutely written in stone, immutable, and they will defend said position to the death in the face of any criticism, evidence, or reason.

(Protip: do not be that person. Please.)

Look. I'm not new to the Internet. I know nobody has ever convinced anybody to change their mind about anything through mere online discussion before. It's unpossible.

But I love discussion. And in any discussion that has a purpose other than gladiatorial opinion bloodsport, the most telling question you can ask of anyone is this:

Why are you here?

Did you join this discussion to learn? To listen? To understand other perspectives? Or are you here to berate us and recite your talking points over and over? Are you more interested in fighting over who is right than actually communicating?

If you really care about a topic, you should want to learn as much as you can about it, to understand its boundaries, and the endless perspectives and details that make up any complex topic. Heck, I don't even want anyone to change your mind. But you do have to demonstrate to us that you are, at minimum, at least somewhat willing to entertain other people's perspectives, and potentially evolve your position on the topic to a more nuanced, complex one over time.

In other words, are you here in good faith?

People whose actions demonstrate that they are participating in bad faith – whether they are on the "right" side of the debate or not – need to be shown the door.

So now you know how to identify a troll, at least by the classic definition. But how do you handle a troll?

You walk away.

I'm afraid I don't have anything uniquely insightful to offer over that old chestnut, "Don't feed the trolls." Responding to a troll just gives them evidence of their success for others to enjoy, and powerful incentive to try it again to get a rise out of the next sucker and satiate their perverse desire for opinion bloodsport. Someone has to break the chain.

I'm all for giving people the benefit of the doubt. Just because someone has a controversial opinion, or seems kind of argumentative (guilty, by the way), doesn't automatically make them a troll. But their actions over time might.

(I also recognize that in matters of social justice, there is sometimes value in speaking out and speaking up, versus walking away.)

So the next time you encounter someone who can't stop arguing, who seems unable to generate anything other than heat and friction, whose actions amply demonstrate that they are no longer participating in the conversation in good faith … just walk away. Don't take the bait.

Even if sometimes, that troll is you.



via Coding Horror http://blog.codinghorror.com/what-is-trolling/

Apr 29, 2015

Hibernate ORM 5.0.0.Beta2 Release

By Steve Ebersole
I have just finished releasing Hibernate O/RM 5.0.0.Beta2. Beyond Beta1, this release adds:

  1. Support for Spatial/GIS data through importing Hibernate Spatial.
  2. Complete redesign of bulk id tables used to support update/delete queries against multi-table structures. The redesign helps better fit what different databases support.
  3. Redesign of transaction management
  4. Much improved (and still improving!) schema management tooling for export, validation and migration.

At this point, 5.0.0 is getting a lot of testing. So even though it is still in Beta I am feeling pretty confident of its quality. I opted for another Beta here instead of CR1 for a few reasons:

  1. Investigate whether we want to convert Hibernate’s native APIs (Session, etc.) to be typed. There is one especially tricky case that needs to be figured out, and a major release like this would be the time to do that.
  2. I have just introduced some pretty significant Transaction changes since Beta1. I felt it would be prudent to have one more Beta to allow people time to try out those changes and allow for additional changes based on feedback
  3. I would still like to complete deprecating the Settings contract. The last piece there is the discussion I started earlier on the dev list wrt its usage in SPI contracts (L2 cache, etc.). This affects a few integrations.
  4. I am working on better Karaf support for hibernate-osgi, specifically creating a Karaf features repository that users can simply pick up and use. That work is well under way, but ongoing.

As always, see http://hibernate.org/orm/downloads/ for information on obtaining Hibernate O/RM.



by i88.ca via Social Marketing by I88.CA » i88.ca

Announcing @Gridstore to Exhibit at @CloudExpo New York [#Cloud]

SYS-CON Events announced today that Gridstore™, the leader in hyper-converged infrastructure purpose-built to optimize Microsoft workloads, will exhibit at SYS-CON's 16th International Cloud Expo®, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY. Gridstore™ is the leader in hyper-converged infrastructure purpose-built for Microsoft workloads and designed to accelerate applications in virtualized environments. Gridstore’s hyper-converged infrastructure is the industry’s first all-flash version of HyperConverged Appliances that include both compute and storage in a single system, and Storage Nodes that provide external storage and work with any Windows servers and the HyperConverged Appliance, all driven by Gridstore’s patented software. Gridstore software architecture delivers native Windows integration, per-VM I/O control, and elastic and independent scaling of resources. Benefits include easy deployment, predictable and controllable high performance, scaling that fits your needs, and up to 50% lower TCO. Headquartered in Mountain View, CA, its products and services are available through a global network of value-added resellers.

read more


via Latest News from JAVA Developer's Journal

Announcing @VicomComputer to Exhibit at @CloudExpo New York [#Cloud]

SYS-CON Events announced today that Vicom Computer Services, Inc., a provider of technology and service solutions, will exhibit at SYS-CON's 16th International Cloud Expo®, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY. They are located at booth #427. Vicom Computer Services, Inc. has been a progressive leader in the technology industry for over 30 years. Headquartered in the NY metropolitan area, Vicom provides products and services based on today’s requirements around unified networks, cloud computing strategies, and virtualization around software-defined data centers, including servers, storage, IT management, data protection/availability, security, infrastructure services and IT consulting services.

read more


via Latest News from JAVA Developer's Journal

What are the benefits of Node.js?

By Kenneth Peeples

What is Node.js?
Ryan Dahl and other developers at Joyent created Node.js. Node.js is an open source, cross-platform runtime environment for server-side and networking applications. It brings event-driven programming to web servers, enabling the development of fast web servers in JavaScript.

In an event-driven application, there is a main loop that listens for events, and then triggers a callback function when one of those events is detected. Node.js also provides a non-blocking I/O API that optimizes an application’s throughput and scalability. In a non-blocking language, commands execute in parallel, and use callbacks to signal completion. In a blocking language, commands execute only after the previous command has completed.
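To illustrate the callback pattern itself (sketched here in Java with NIO.2 rather than JavaScript, to keep this digest's examples in one language), the read below returns immediately and a handler fires when the I/O completes; app.log is a hypothetical file:

    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousFileChannel;
    import java.nio.channels.CompletionHandler;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class NonBlockingRead {
        public static void main(String[] args) throws Exception {
            AsynchronousFileChannel channel =
                AsynchronousFileChannel.open(Paths.get("app.log"), StandardOpenOption.READ);
            ByteBuffer buffer = ByteBuffer.allocate(4096);

            // The read is dispatched and this call returns immediately; the
            // handler below is the callback, which is the same pattern
            // Node.js applies to all of its I/O.
            channel.read(buffer, 0, buffer, new CompletionHandler<Integer, ByteBuffer>() {
                @Override
                public void completed(Integer bytesRead, ByteBuffer buf) {
                    System.out.println("read " + bytesRead + " bytes in the background");
                }

                @Override
                public void failed(Throwable t, ByteBuffer buf) {
                    t.printStackTrace();
                }
            });

            System.out.println("read dispatched; the caller was never blocked");
            Thread.sleep(1000); // keep the JVM alive long enough for the callback
        }
    }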

Node.js uses the Google V8 JavaScript engine to execute code, and a large percentage of the basic modules are written in JavaScript. Node.js contains a built-in library to allow applications to act as a Web server without software such as Apache HTTP Server or IIS.

NPM is the pre-installed package manager for the Node.js server platform. It is used to install Node.js programs from the npm registry. The package manager allows publishing and sharing of open-source Node.js libraries by the community, and simplifies installation, updating and un-installation of libraries.

What are some of the Benefits of Node.js?

1. Asynchronous I/O

It’s built to handle asynchronous I/O from the ground up and is a good match to a lot of common web- and network-development problems. In addition to fast JavaScript execution, the real magic behind Node.js is called the Event Loop. To scale to large volumes of clients, all I/O intensive operations in Node.js are performed asynchronously.

2. JavaScript

Node.js is JavaScript, so the same language can be used on the back end and the front end. This breaks down the boundaries between front-end and back-end development.

3. Community Driven

In addition to its innate capabilities, Node.js has a thriving open source community which has produced many excellent modules to add additional capabilities to Node.js applications. One of the most famous is Socket.io, a module to manage persistent connections between client and server, enabling the server to push real-time updates to clients. Socket.io abstracts away from the developer the technology used to maintain these connections, automatically using the best technology available for a particular client (WebSockets if the browser supports them, JSONP or Ajax long-polling if not).


References:
https://blog.udemy.com/learn-node-js/
http://pettergraff.blogspot.com/2013/01/why-node.html



by i88.ca via Social Marketing by I88.CA » i88.ca

Introducing the latest Hawkular component

By unknown

I’m pleased to introduce a new component to the Hawkular family aimed at delivering a Business Transaction Management solution. The initial focus will be on tracing a business transaction across a range of resources, on-premises and in the cloud, to provide an end to end view helping to isolate business transaction failures and performance bottlenecks.
At this stage we are still putting together high level plans and hope to start publishing more details of a roadmap over the coming weeks.
If you have requirements in this area, please feel free to create a JIRA issue, or discuss them on our IRC channel or dev mailing list.



by i88.ca via Social Marketing by I88.CA » i88.ca

[email protected] Launches Ad Campaign on ‘Cloud Computing Journal’ | @CloudExpo [#Cloud]

SYS-CON Media announced today that Unitrends, a provider of cloud-empowered all-in-one continuity solutions that increase your IT confidence, has launched ad campaigns on SYS-CON's i-Technology sites, which include Cloud Computing Journal, DevOps Journal, Virtualization Journal, and IoT Journal. SYS-CON Media's interactive programs, with an average of 47 million page views per month, have proven to be one of the most effective lead-generating tools for our advertising partners. With 1.2 million qualified IT professionals across SYS-CON's network of i-Technology sites, your company will have access to a multitude of influential enterprise development managers and decision makers in the marketplace that you're not currently reaching. These packages will put you in touch with your best customers and deliver the reach, impact and visibility necessary to stay competitive in today's market.

read more


via Latest News from JAVA Developer's Journal

NASDAQ: EIGI Research Report

EIGI is a technology company that provides cloud-based solutions for small business including Domain Registration, Domain Hosting, Shared and ...
via https://www.warriortradingnews.com/endurance-international-nasdaq-eigi-drops-on-research-report/9748

Quickstart example with Dockerized Teiid

By Ramesh Reddy

Container-based deployments seem to be picking up steam in every IT department, so to keep pace, Teiid has released a Docker image for its 8.10.Final version.

In this article, I will showcase the same quickstart example from Teiid, but using the Docker-based images. If you are interested, please read the complete article at https://developer.jboss.org/docs/DOC-53281

Thanks

Ramesh..



by i88.ca via Social Marketing by I88.CA » i88.ca

Microservices and APM By @Ruxit | @DevOpsSummit [#DevOps #Microservices]

This digest provides an overview of good resources that are well worth reading. We’ll be updating this page as new content becomes available, so I suggest you bookmark it. Also, expect more digests to come on different topics that make all of our IT-hearts go boom!

read more


via Latest News from JAVA Developer's Journal

How The Arrival Of gTLDs Could Shape The Future

The .company TLD was purchased by Donuts, a domain registry startup ... It is still not uncommon for businesses to register domains and host ...
via http://www.domaininformer.com/guides/General_Information/articles/150429-How-The-Arrival-gTLDs-Could-Shape-Future

Slides from Online PEX Webinar – A Guide to Modern BPM Tools

By Eric D. Schabell

As mentioned previously, today I presented a Red Hat JBoss BPM Suite based webinar to show you everything you would want to know about our JBoss BPM Travel Agency solution.

This event was hosted by PEX Process Excellence Network and you should be able to view a recording here.

I hope you enjoyed the tour of our JBoss BPM Suite Travel Agency and that you are now able to understand the benefits of using JBoss Automate solutions.

A Guide to Modern BPM Tools

All across the Internet you will find references to solutions, offerings, and products that try to align with business process management (BPM) solutions.

Whether you’re a Business Analyst or in IT strategy, this webinar will illustrate how easy it is to model and automate business processes with modern BPM tools in the travel industry.

If you are talking to an airline, a baggage handler, a bookings agency or anyone in between, they all have one thing in common. They are dealing with complex business processes that often need to combine rules, events, resource planning, and processes.

You’ll take a deep look into a sample solution for this industry, simulating a travel agency booking system with:

  • Service integration
  • Multiple tasks
  • Complex BPM elements and
  • Rule-based fraud detection for payment processing

You will leave with an advanced overview of the capabilities of the Red Hat® JBoss® BPM Suite.

Feel free to reach out if you have questions or comments, I always make time to chat with interested users.



by i88.ca via Social Marketing by I88.CA » i88.ca

Announcing @LAutomation to Exhibit at @CloudExpo New York [#IoT #Cloud]

SYS-CON Events announced today that Litmus Automation will exhibit at SYS-CON's 16th International Cloud Expo®, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY. Litmus Automation’s vision is to provide a solution for companies that are in a rush to embrace the disruptive Internet of Things technology and leverage it for real business challenges. Litmus Automation simplifies the complexity of connected devices applications with Loop, a secure and scalable cloud platform.

read more


via Latest News from JAVA Developer's Journal

The API Briefing: Top Five Findings for API Developers from Pew Research Center

(Photo: a keyboard key with a green bar-chart button. Credit: stevanovicigor/iStock/Thinkstock)

The Pew Research Center just released a report on how Americans view open government data. The following findings were based on a November to December 2014 survey of 3,212 adults.

  1. Two-thirds of Americans use the Internet or an app to connect with the government.
    According to Pew, 37% use the Internet to connect with the federal government, 34% connect with their state government, and 32% connect with their local government. It would be interesting to see a further breakout of users who use an app or mobile devices versus a desktop or laptop computer and Internet browser.
  2. The most popular reason for connecting with the government is to find information about government recreational activities.
    Twenty-seven percent of users searched for information about recreation, followed by the 18% who renewed their driver's licenses or auto registrations. The third most popular reason (13%) is to learn more about and/or apply for government benefits.
  3. Most Americans do not realize that they are using government data when they use apps.
    A large majority of users consistently refer to apps that provide weather or directional information. These apps rely on data provided by the National Weather Service or the federal government's Global Positioning System (GPS). Often the government data is mixed with proprietary information or services that obscure the origins of the essential data provided by the government.
  4. Few Americans think that the government shares data effectively.
    Only 5% of respondents agreed that the federal government and state governments are very effective in sharing data. Local government does slightly better, with 7% agreeing that local government data sharing is very effective. Over half of the respondents say that the federal government does an ineffective job of sharing data, and just under half agree that state and local governments are also ineffective.
  5. Roughly half of Americans think that open data will improve government.
    That is good news, but that also means that half of the respondents think that open data will not improve government. Specifically, 49% think that open data will improve government services while an equal percentage think that open data will have no effect on the delivery of government services.

So, the good news is that most Americans use the Internet or apps to access government information, and many are comfortable conducting online transactions with the government. Government data is also effectively being used by third party developers to create widely-adopted apps and fuel the app economy. The Pew report demonstrates through past surveys that Americans are increasingly becoming more comfortable transacting business with the government through the Internet and apps.

Even so, as the last two findings show, there is much more work to convince Americans of the promise of government open data. Local and state government seem to be more effective with open data apps so maybe they can help improve federal apps. It would also be helpful to remind Americans how government open data fuels their favorite commercial apps. Finally, app developers can learn best practices from app developers who create government recreational information apps and state/local government apps that provide common government services such as renewing drivers’ licenses.

The Pew survey demonstrates that government apps have come a long way, but there are still many challenges ahead. I am certain that government open data providers and app developers will fulfill the still unrealized potential of government open data and apps.

*API – Application Programming Interface; how software programs and databases share data and functions with each other. Check out APIs in Government for more information.

Each week, “The API Briefing” will showcase government APIs and the latest API news and trends. Visit this column every week to learn how government APIs are transforming government and improving government services for the American people. If you have ideas for a topic or have questions about APIs, please contact me via email. All opinions are my own and do not reflect the opinions of the USDA and GSA.


by Bill Brantley via DigitalGov

Devoxx4Kids CFP at Red Hat Summit and DevNation

By Markus Eisele

(Image: Devoxx4Kids logo)

Red Hat is hosting a Devoxx4Kids event that will bring technology educators and kids together on Sunday, June 21, in Boston, MA.

This is an opportunity for developers and educators who would like to give a 2-hour hands-on workshop to kids from 6 to 16 years old. Presenters will need to arrange all the software and hardware required for the lab, except laptops, which will be provided.

Submit your workshop now and learn more about suggested topics.

Capacity is limited, and we are looking forward to your submissions. You have until May 7th to submit your workshops.

If you’ve submitted talks for the main conference, then this would be a great opportunity to bring your kids. They can attend a workshop, or even deliver one. Young presenters are always very inspiring!

Source::


by i88.ca via Social Marketing by I88.CA » i88.ca

Integration Testing JBoss Fuse 6.x With Pax Exam, Part I

By christian posta

no test, no beer

JBoss Fuse is a powerful distributed integration platform with built-in features for centralized configuration management, service discovery, versioning, API gateway, load balancing, failover, and more for your integration-focused microservice deployments. JBoss Fuse 6.x is built on top of the Fabric8 1.x open source project. This blog is the first part in a two-part series on integration testing when building integration microservices on top of JBoss Fuse.

Honestly, I’m pleasantly surprised these days when people ask about the details of a testing strategy for the software/services they’re writing. I figured everyone agrees testing is important but that nobody actually does it. I do a lot of work with customers that use JBoss Fuse to write their integration services, and I often get asked how best to go about testing those services.

JBoss Fuse uses Apache Camel as its routing and mediation engine, and you end up writing the bulk of your integration logic with Camel. For testing Camel routes, I highly recommend the test framework that Camel ships with; in fact, I recommend you build the bulk of your tests with it. Being able to run Camel and its associated tests outside of a container is an important distinction from other integration solutions, and your testing should take full advantage of that fact.

However, what if you have good Camel route test coverage and now want to take it a step further? You want to deploy your routes/applications into the JBoss Fuse container and verify that everything was wired correctly, that OSGi imports/exports/metadata were included correctly, that services attached to the HTTP service, and so on. These are legitimate reasons to deploy to a container, but doing it manually is error prone and slow. So what options are there for automating this?

I’ve run across a couple of different ways to do this. One is Arquillian, a container-agnostic integration testing framework originally developed for JBoss Application Server/WildFly/EAP. It has some good modules for integration testing your OSGi deployments. However, once you try to do more “black box” integration testing, Arquillian is not powerful enough at the moment for JBoss Fuse testing. For that, I’d recommend the Pax Exam project. Pax Exam has been around for quite a while and has been used to test the various derivatives of ServiceMix/Karaf, which are similar enough to JBoss Fuse for testing purposes.

So, in an effort to help others get started with Pax Exam for integration testing JBoss Fuse 6.x (and, more selfishly, so that I can jot these notes down and come back to them; I’ve done this and then forgotten it enough times that it’s time to write it down), I’ve put together a getting-started primer.

itests

I typically build out automated integration tests along with the project I’m going to test in a submodule named itests. You can feel free to do the same, or put your integration tests into a separate project. For this guide, I’ve built the integration tests into the Rider Auto OSGI sample project that is adapted from Claus Ibsen and Jon Anstey's book Camel in Action. Feel free to browse that project to get a feel for what the modules do.

To get started, I highly recommend you browse the Pax Exam documentation and then poke your head into the file named FuseTestSupport. In it, you’ll see the method that contributes the @Configuration of the OSGi container:
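As a rough sketch, a @Configuration method built on the pax-exam-container-karaf options can look something like this; the Maven coordinates match the jboss-fuse-minimal path mentioned below, while the class shape, Karaf version, and unpack directory are assumptions (the exact options in FuseTestSupport may differ):

    import java.io.File;

    import org.ops4j.pax.exam.Configuration;
    import org.ops4j.pax.exam.Option;
    import org.ops4j.pax.exam.karaf.options.LogLevelOption.LogLevel;

    import static org.ops4j.pax.exam.CoreOptions.maven;
    import static org.ops4j.pax.exam.karaf.options.KarafDistributionOption.karafDistributionConfiguration;
    import static org.ops4j.pax.exam.karaf.options.KarafDistributionOption.keepRuntimeFolder;
    import static org.ops4j.pax.exam.karaf.options.KarafDistributionOption.logLevel;

    public class FuseTestSupport {

        @Configuration
        public Option[] config() {
            return new Option[]{
                // boot the actual JBoss Fuse distribution from the local Maven repo
                karafDistributionConfiguration()
                    .frameworkUrl(maven()
                        .groupId("org.jboss.fuse")
                        .artifactId("jboss-fuse-minimal")
                        .version("6.1.0.redhat-379")
                        .type("zip"))
                    .karafVersion("2.3.0")   // Fuse 6.1 is based on Karaf 2.3.x
                    .name("JBoss Fuse")
                    .unpackDirectory(new File("target/pax/")),
                // keep the exploded container around for post-mortem debugging
                keepRuntimeFolder(),
                logLevel(LogLevel.INFO)
            };
        }
    }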

Note that we’re using the actual distribution of JBoss Fuse, not some hacked-together version. For this to work, you need to go to the JBoss.org website, download Fuse, and install it into your Maven repository corresponding to the coordinates specified in the above code snippet, to wit, something like this: ~/.m2/repository/org/jboss/fuse/jboss-fuse-minimal/6.1.0.redhat-379/. Now when the test runs, it will find the Fuse distro.

You can also adjust the configuration options, including editing some of the out-of-the-box configuration, adding features, and altering the log level. Take a look at the KarafDistributionOption documentation or the CoreOptions documentation, which detail all of the available options.

This part is fairly straightforward. Here’s an example of a simple test that’s built on top of that configuration:
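As a minimal sketch (the test class name and assertion are illustrative, not the project’s actual test), such a test can look like this:

    import static org.junit.Assert.assertNotNull;

    import javax.inject.Inject;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.ops4j.pax.exam.junit.PaxExam;
    import org.ops4j.pax.exam.spi.reactors.ExamReactorStrategy;
    import org.ops4j.pax.exam.spi.reactors.PerClass;
    import org.osgi.framework.BundleContext;

    @RunWith(PaxExam.class)
    @ExamReactorStrategy(PerClass.class)
    public class RiderAutoIT extends FuseTestSupport {

        // Pax Exam injects container internals into the test probe
        @Inject
        BundleContext bundleContext;

        @Test
        public void bundleContextIsAvailable() {
            // we are running inside the Fuse container at this point
            assertNotNull(bundleContext);
        }
    }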

This test actually gets injected into the container (see the Pax Exam docs for more on that) and can access the internals of the container (e.g., dependency injection) to run some asserts based on the internals of your deployment.

black box testing

Being able to run your automated integration tests in a way that gives complete access to your deployment and to the container runtime is great. You can do sophisticated tests to make sure everything deployed correctly, that configuration was applied the way you thought, and that you can retrieve all of the services you expect. But another type of test is very useful: being able to deploy your integration services and exercise their functionality remotely (outside the container) without knowing much about the details. For example, interacting with the interfaces exposed by the integration service, such as JMS, the file system, and REST/SOAP endpoints. You can use standard libraries for accessing these interfaces, but how do you expose the Fuse container as a black box for this type of testing? The answer is that Pax Exam allows you to run your container in “server” mode. The unfortunate part is that this is exposed as an API you would have to use to orchestrate a “server” mode container yourself. A better way, if you’re a Maven user, is to attach to the integration-test lifecycle and let Maven boot up and tear down the server.

Luckily, the Pax Exam project also includes a Maven plugin that plugs into the Maven lifecycle’s integration-test phases.

For example, include this in your pom.xml:
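As a sketch, the relevant plugin configuration looks something like the following; the exam-maven-plugin and its start/stop goals come from the Pax Exam project, while the configClass value and the version property name are assumptions for this example:

    <plugin>
      <groupId>org.ops4j.pax.exam</groupId>
      <artifactId>exam-maven-plugin</artifactId>
      <version>${pax.exam.version}</version>
      <configuration>
        <!-- class whose @Configuration method describes the container to boot -->
        <configClass>org.jboss.fuse.example.itests.FuseTestSupport</configClass>
      </configuration>
      <executions>
        <execution>
          <id>start-server-mode-container</id>
          <phase>pre-integration-test</phase>
          <goals>
            <goal>start-container</goal>
          </goals>
        </execution>
        <execution>
          <id>stop-server-mode-container</id>
          <phase>post-integration-test</phase>
          <goals>
            <goal>stop-container</goal>
          </goals>
        </execution>
      </executions>
    </plugin>

With this in place, Maven boots the container before your integration tests run and tears it down afterward, so the tests themselves only talk to the container’s external interfaces.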

Please take a look at the entire pom.xml, which shows how you can break things up into Maven profiles and attach to the Maven Failsafe plugin for integration testing.

supporting services

So far, Pax Exam is doing a lot of heavy lifting for running our automated integration tests with JBoss Fuse. However, what if we want to attach additional services to the bootstrap of the container? Maybe we want to start an instance of ActiveMQ before the container comes up (since we may have services that need to attach to an external ActiveMQ, and we can then use the messages landing in the queues/DLQs to assert behavior), and make sure to tear it down at the end of a test. You can extend one of the different Pax Exam reactors to do just that:
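Here is a minimal sketch of that idea; the factory name, the embedded-broker approach, and the create signature shown are assumptions based on Pax Exam’s PerClass reactor, and the real project may do this differently:

    import java.util.List;

    import org.apache.activemq.broker.BrokerService;
    import org.ops4j.pax.exam.TestContainer;
    import org.ops4j.pax.exam.TestProbeBuilder;
    import org.ops4j.pax.exam.spi.StagedExamReactor;
    import org.ops4j.pax.exam.spi.reactors.PerClass;

    // Hypothetical reactor: boots an embedded ActiveMQ broker before the
    // Fuse container is staged, and stops it when the test JVM exits.
    public class PerClassWithActiveMq extends PerClass {

        @Override
        public StagedExamReactor create(List<TestContainer> containers,
                                        List<TestProbeBuilder> probes) {
            final BrokerService broker = new BrokerService();
            try {
                broker.setPersistent(false);
                broker.addConnector("tcp://localhost:61616");
                broker.start();
            } catch (Exception e) {
                throw new RuntimeException("could not start embedded ActiveMQ", e);
            }
            Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        broker.stop();
                    } catch (Exception ignored) {
                        // best effort on shutdown
                    }
                }
            }));
            return super.create(containers, probes);
        }
    }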

And then in your test, when you specify a reactor strategy, use the custom one:
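For example, continuing the class names from the sketch above:

    @RunWith(PaxExam.class)
    @ExamReactorStrategy(PerClassWithActiveMq.class)  // our hypothetical custom reactor
    public class AmqBlackBoxIT extends FuseTestSupport {
        // tests here can assume a broker is listening on tcp://localhost:61616
    }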

fuse fabric

This post covers writing integration tests against standalone versions of Fuse. A lot of the same mechanics will be used to create integration tests against a Fuse Fabric/Fabric8 deployment as well. That will be coming in Part 2 of this post. Stay tuned! Also follow me on Twitter @christianposta for tweets about Fuse/Fabric8/microservices/DevOps and updates on new blog posts!

Integration Testing JBoss Fuse 6.x With Pax Exam, Part I was originally published by Christian Posta at Software Blog on April 29, 2015.

Source:: http://blog.christianposta.com/testing/integration-testing-jboss-fuse-6-x-with-pax-exam-part-i/


by i88.ca via Social Marketing by I88.CA » i88.ca

Apr 28, 2015

jBPM on Red Hat Summit / DevNation

By Kris Verlaenen

From June 21st – 26th, Boston will be the place to be for Red Hat Summit and DevNation.
This year, I’ll be presenting two sessions at Summit:
Wednesday, June 24 (10:40 am – 11:40 am)
Enabling business users to update their applications and processes is an integral part of business automation. Doing so requires rich client web technology and a powerful workbench to customize and extend business rules management (BRM) and business process management (BPM) solutions.

Red Hat JBoss BPM Suite is a flexible and powerful BPM platform, offering business process modeling, execution, and monitoring capabilities for numerous use cases. It can be used in different environments, and, as a result, the platform can be integrated in multiple architectures and configured in detail. The platform can be customized to provide customer-specific enhancements.

In this session, you will:

  • View a live process-driven application demo.
  • Discover the top technical things you need to know about the latest version of JBoss BPM Suite.
  • Get answers to some of the most asked questions.
  • Learn the truth about BPM myths.
  • Find out what’s next for JBoss BPM Suite.
Continuously improve your processes with Red Hat JBoss BPM Suite
Kris Verlaenen — jBPM Project Lead, Red Hat
Thursday, June 25 (1:20 pm – 2:20 pm)
Business process management (BPM) lets your business operate smoothly and in a controlled manner. But to get the results you want, you have to be willing to continuously improve your processes. Join us to see how jBPM and Red Hat JBoss BPM Suite help you continually improve your processes.

We will explain and demo how to:

  • Collaborate on designing processes.
  • Manage your processes using multiple repositories and projects.
  • Promote business assets (from development to production).
  • Execute different versions of your processes in parallel spaces.
  • Perform process instance migration.
  • Implement a new functionality as a process.
[Credits for this proposal go out to Maciej, who did most of the work]
I won’t be presenting at DevNation this year, but I’ll definitely be around as well, for some late-night coding and, if necessary, some beers :) Let me know if you’re planning to attend and would like to meet up at some point!
There will be numerous other interesting Summit presentations involving jBPM as well, along with a lab on integration with Fuse.

Source::


by i88.ca via Social Marketing by I88.CA » i88.ca

Chef and Canonical Partner | @Chef @Canonical @DevOpsSummit [#DevOps]

Chef and Canonical announced a partnership to integrate and distribute Chef with Ubuntu. Canonical is integrating the Chef automation platform with Canonical's Metal-as-a-Service (MAAS), enabling users to automate the provisioning, configuration and deployment of bare metal compute resources in the data center. Canonical is packaging Chef 12 server in upcoming distributions of its Ubuntu open source operating system and will provide commercial support for Chef within its user base.

read more


by via Latest News from JAVA Developer's Journal

Streak’s Top 6 Tips for App Engine

By GCP Team

When Streak — CRM in your inbox — launched in March 2012, our userbase grew 30% every week for four consecutive months. Today, Streak supports millions of users with only 1.5 back-end engineers. We chose Google App Engine to power our application because it enabled our team to build features fast and scaled with user growth. Plus, we didn’t have to worry about infrastructure.
Caption: Streak’s data growth
Here are six tips we’ve learned building on App Engine, and if you’d like even more detail – including an overview of our app’s architecture and 15 minutes of Q&A – you can check out my webinar.
1. Keep user-facing GET requests fast
This tip isn’t specific to App Engine; it applies to most web applications. User-facing GET requests should be quick. App Engine has a 60-second timeout on all requests, but frankly, if the total latency after a user interaction is longer than 200ms, users will perceive your app as slow. To keep requests fast, do your heavyweight processing, such as calculations or complex queries, either in the background or at write time. That way, when the user requests data (read time), it’s already precalculated and ready to go.
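For instance, one way to push heavy work off the read path on App Engine is the task queue API. A minimal sketch, where the /tasks/precompute worker URL and the surrounding class are hypothetical:

    import com.google.appengine.api.taskqueue.Queue;
    import com.google.appengine.api.taskqueue.QueueFactory;
    import com.google.appengine.api.taskqueue.TaskOptions;

    public class WriteTimeWork {

        // Called on the user-facing write path: enqueue the expensive work
        // instead of doing it inline, so the request returns quickly.
        public void onUserWrite(String entityKey) {
            Queue queue = QueueFactory.getDefaultQueue();
            queue.add(TaskOptions.Builder
                    .withUrl("/tasks/precompute")   // hypothetical worker servlet
                    .param("key", entityKey));
        }
    }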
2. Take advantage of Managed VMs
So, what are Managed VMs? Managed VMs are a new hosting environment for App Engine, enabling you to take advantage of beefier compute resources and run your own custom runtimes. For example, we host our back-end data processing modules on n1-standard-1 machines (1 CPU and 3.75 GB of memory), rather than App Engine frontend instances. This provides better performance and cost savings, thanks to sustained use discounts. Yes, Managed VMs take a little longer to boot up than an App Engine frontend instance, but they are perfect for our background processing needs.
3. Denormalize for faster reads
Cloud Datastore is a NoSQL database, so if you’re coming from the RDBMS world, it requires a different approach to data modeling. You have to be comfortable denormalizing and duplicating data, since SQL joins don’t exist. Data duplication might feel uncomfortable, but it makes your reads very fast.
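As an illustration (the Task/owner schema here is made up, not Streak’s actual model), a write that duplicates data so reads need no join might look like this with the Datastore Java API:

    import com.google.appengine.api.datastore.DatastoreService;
    import com.google.appengine.api.datastore.DatastoreServiceFactory;
    import com.google.appengine.api.datastore.Entity;

    public class DenormalizedWrites {

        // Hypothetical model: each Task entity duplicates its owner's display
        // name, so rendering a task list never needs a second fetch or "join".
        public void saveTask(String name, String ownerId, String ownerDisplayName) {
            DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();

            Entity task = new Entity("Task");
            task.setProperty("name", name);
            task.setProperty("ownerId", ownerId);
            // duplicated at write time; must be updated if the owner renames
            task.setProperty("ownerDisplayName", ownerDisplayName);

            datastore.put(task);
        }
    }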
4. Break your application into modules
Modules make it easy for you to break your App Engine app into different components. For example, you could have a module for your user-facing traffic and one for background processing. Each module has its own yaml file, so you can set parameters such as instance size, version number, runtime language used, and more. As mentioned above, our backend modules take advantage of Managed VMs for performance/cost benefits, while our frontend module uses App Engine frontend instances, which scale more quickly. The documentation discusses best practices for structuring your app.
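As a rough sketch, a background module’s configuration might look something like this; the module name and values are illustrative, and you should check the modules documentation for the exact schema in your runtime (Java modules can use an XML descriptor with equivalent settings):

    # backend/app.yaml -- hypothetical background-processing module
    module: backend
    version: 1
    runtime: java
    vm: true            # run this module on Managed VMs (see tip 2)
    manual_scaling:
      instances: 2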
5. Deploy aggressively and use traffic splitting
At Streak, we do continuous deployment because versioning, deployment, and rollout are easy with App Engine. In fact, sometimes we deploy up to 20 times per day to get changes into the hands of customers. We aggressively deploy to many production versions of our app and then selectively turn on new features for our users. As we slowly ramp up the traffic to these new versions via traffic splitting, we catch issues early and often. These are usually easy to deal with, because each new deploy contains a small set of functionality, so it’s easy to find the relevant issues in the code base. We also use Google Cloud Monitoring and our own homegrown system (based on #6 below) to monitor these deploys for changes.
6. Use BigQuery to analyze your log files
Application and request logs can give you valuable insights into performance and help you make product improvements. If you’re just starting out, the log viewer’s list of recent requests will be just fine, but once you’ve reached scale you’ll want to do analysis on aggregate data or a specific user’s requests. We’ve built custom code to export our logs to Google BigQuery, but you can now stream your logs directly from the Developers Console. With these insights, my team can build a better user experience.
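As an example of the kind of question you can answer once logs are in BigQuery, here is a sketch in BigQuery’s legacy SQL; the dataset/table name and the protoPayload field layout depend entirely on your export configuration and are assumptions here:

    -- hypothetical table of one day's exported App Engine request logs:
    -- find the slowest endpoints by average latency
    SELECT
      protoPayload.resource AS endpoint,
      COUNT(*) AS requests,
      AVG(protoPayload.latency) AS avg_latency
    FROM [logs.appengine_googleapis_com_request_log_20150430]
    GROUP BY endpoint
    ORDER BY avg_latency DESC
    LIMIT 10;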
Watch the webinar
App Engine has been critical to our success. As our application has scaled, so has App Engine and we’ve been able to focus on building features for our customers, rather than ops. To learn more tips about App Engine – including an overview of our architecture and 15 minutes of Q&A – watch the full webinar.


-Aleem Mawani, CEO and co-founder, Streak

Source::


by i88.ca via Social Marketing by I88.CA » i88.ca

Gambling problem: Growth slows for Playtech

Playtech is considered a proxy for the global betting industry, with its business booming alongside the growth of online gambling around th...