Heroku Scalability Simplified

What is Scalability?

Scalability is a general software concept. It means a system is built in a way that can easily grow, whether in usage (for example, more people accessing a site) or by adding new features. It is the ability of the software to continue to function properly as its context changes.

In Heroku, it refers to the system being able to cope with more users or more traffic.

Heroku Scalability

Heroku provides easy-to-use tools that enable developers to scale dynos (Heroku app containers) instantly to meet demand. After an app is deployed, it may require adjustments in response to things like increased traffic, new functionalities, or business scale. You can scale using the Heroku Dashboard or Heroku CLI.

Heroku Dashboard:

The Heroku dashboard is the web user interface for Heroku’s core features and functionality. It enables programmers to manage their apps, add-ons, deployment processes, metrics, and much more. It provides a simple slider interface for scaling dynos and you see the results immediately reflected in your dyno formation.

Heroku CLI:

Developers can also oversee their dyno formation using the Heroku command-line interface (CLI), which lets you create and manage apps from the shell of various operating systems. With a simple command, you can increase the number of web and worker dynos, or change the dyno type of any number of dynos at once.

Scalability Resources

Scalability resources fall into two broad categories: horizontal and vertical. Additionally, Heroku offers a third option, autoscaling, which automates horizontal scaling.

Scaling horizontally: adding more dynos

Adding more dynos of a given dyno process type scales your application horizontally. For example, adding more web dynos lets Heroku route incoming HTTP requests across more running instances of your web servers, which will typically improve performance for a higher traffic volume. 

Also, adding more worker dynos allows your app to process more jobs in parallel and handle a larger volume of jobs. There are some cases where scaling horizontally won’t help, such as bottlenecks on the backend services and long requests or jobs.

Scaling vertically: upgrading to larger dynos

Upgrading dynos to larger dyno types will provide your app with more memory and CPU resources. 

All the dynos are isolated. But apps running on Free, Hobby, and Standard dynos may share an underlying compute instance (they are multi-tenant) and consequently may encounter some degree of performance variance. Performance dynos and dynos that run in Heroku Private Spaces do not share an underlying compute instance with other dynos, so they experience low variability in performance.

Autoscaling

Heroku enables you to automatically increase or decrease the number of web dynos needed to keep your app's 95th-percentile response time within the threshold you specify. Because it works from your app's existing throughput, the autoscaling feature removes the need to anticipate traffic spikes. Autoscaling is included for free on Performance dynos and dynos running in Private Spaces.

Heroku also makes it easy for developers to scale any number of device integrations individually: they can simply provision more dynos to handle increased traffic coming in from a specific device.

If autoscaling doesn't cover your needs or isn't working as expected for your apps, Heroku recommends trying the Rails Autoscale or Adept Scale add-ons.

Scalability Simplified, with Examples

Horizontal Scalability

If you have a store that serves clients and you expect 10 clients at a time, you will hire 2 employees to provide customer service. Suddenly, your store becomes popular and 100 clients arrive at the same time. Those 2 employees won't be enough to meet their needs. You will need to scale horizontally, employing more people to serve more clients.

The store works like your app, website, mobile application, system, etc. The site was prepared for 10 users connected at a time, working with 2 web dynos. Suddenly, your site has 100 users loading it at the same time. To scale horizontally in Heroku, you can add more web dynos. The incoming HTTP requests are like more clients arriving at the store, waiting to be served.

If your store is a restaurant, the people serving the clients would be the waitresses. However, a restaurant can't satisfy customer needs with waitresses alone; there are also employees working behind the counter in the kitchen. Even if you hire 100 waitresses, if you have only one person cooking, that person won't be able to cook for 100 people. You will need more people working in the kitchen.

Kitchen employees are the worker dynos in Heroku; they are in charge of background jobs. If you have more of them, they can cook in parallel. The same happens with worker dynos.

Vertical Scalability

Horizontal scalability might not always be enough. Continuing with the restaurant example, if the restaurant premises are too small, even if you hire 100 waitresses and 10 people in the kitchen, you won't be able to fit enough tables. You will need to increase your capacity. You will need to scale vertically.

Another example that relates to Heroku would be a chef who only knows how to prepare juices, or a waitress who only speaks one language. That cook won't be able to add more dishes to your menu, and that waitress won't let you serve clients who speak other languages. In this case, the issue won't be solved by adding more waitresses or cooks; you will need to employ people who have the skills you need. You will need to scale vertically.

Autoscaling

So, what would happen if the demand in your restaurant is variable? 

It wouldn't be cost-effective to pay 100 waitresses full-time. However, you will need employees who can be called in on short notice when more people arrive at the restaurant. You will need full-time employees plus people you call in for emergencies. This is what Heroku offers with its autoscaling system. Developers can set up 10 dynos and have additional dynos added when response times get slow. When the workload increases, Heroku automatically provides the extra dynos needed to cope with the demand. Once the demand decreases, it automatically removes them, allowing you to save money.

Heroku offers a practical step-by-step guide showing you how to scale your dyno formation easily. If you want to learn more about Heroku, click here to find useful information and three ways it enhances cloud infrastructure.

The Simpsons, Legacy Code, and Maintainability for Salesforce Code

As soon as we start to code, we are taught many best practices, tricks, design patterns, and a huge “etc.” But it’s not until our first gray hair that we start to understand why all those best practices exist and why it’s important to apply them. I wanna write about my experience with legacy code and what worked for me with some examples. 

First, let’s discuss what legacy code is, and compare different degrees of code quality with one of the most famous and beloved families ever.

Legacy Code and The Simpsons

When we hear about legacy code, the first thing we think is “old code”. So let’s start our analysis with the oldest member of the family.

 Abraham Simpson (aka “Grandpa”)

Legacy sounds like something “old” that’s with us without us wanting it, just like the Simpsons family and Grandpa. He has aging problems (his memory is gone, he lost a kidney, he’s a little crazy…). He lives in a nursing home and his family tries to avoid him. For sure Grandpa is legacy code and any developer would prefer to resign rather than work on it. Grandpa is probably an old COBOL project the company avoids changing and migrating to newer technologies. The reason? Because it’s going to be really expensive and the entire business depends on it.

Homer J. Simpson

He is middle-aged. For some people he is old, for others he is not (it depends on how close in age the reader is). But he is dumb, and he makes a lot of mistakes all the time. He drives under the influence, he is lazy, he has no idea what he's doing at work, and a BIG etc. There are some episodes where he tried to improve himself, but he wasn't able to do it. If he were code, you wouldn't be pleased to have to change it. I imagine Homer as a 15-year-old Java application that is causing a lot of problems, but we are still trying to maintain it because we love it (because everyone loves Homer).

Bart Simpson

He is really young, 10 years old. He should be a nice kid who helps the elderly cross the street safely, but instead he is always causing trouble for everyone. Marge needs to keep an eye on him every single second. If Bart were code, he would be a React/Node.js/MongoDB application built by a startup with a low budget and limited time to develop an MVP to secure an angel investor. He lacks testing and documentation, and has a lot of tech debt.

Lisa Simpson

Nice, smart, behaves well, never gets into trouble…in a few words, the daughter anyone would love. She tries to improve herself every single second. If Lisa were code, she would have a very good architecture, awesome code quality, comprehensive test suite, cool documentation, and good CI/CD. But there aren’t that many Lisas in the real world.

Marge Simpson

Finally, Marge, the mom of the family. She is some years younger than Homer. She tries to be good, support her family, and be a good citizen…but she has been in trouble, too. I would say she is the average project we work on. It has parts with good quality, parts with poor quality. Sometimes we make mistakes because of the coding quality and some tech debt that we have created knowing that we are going to pay a tax on it later.

So, what is legacy code?

In “Working Effectively with Legacy Code” (ISBN-9787111466253) Michael C. Feathers defines it as “code without a unit test.” I agree that code without unit tests is legacy code, but I don't consider that merely having unit tests makes your code a modern “Lisa” program. I prefer to consider legacy code anything where making any change is hard, costly, and risky because the maintainability is not high enough, for reasons that include:

  1. Not having a comprehensive test suite
  2. Poor documentation
  3. Low code quality 

I am pretty sure that while you are reading this, more than one of you is writing legacy code because legacy does not mean “old.”

Maintainability 

According to IEEE, maintainability is:

“The ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changing environment.” 

So let’s discuss some topics that can make our code more maintainable.

Automated Testing

This is a must. We cannot change anything with any degree of trust without having a good automated test suite to support us. 

Salesforce has a really good API to help developers with “unit testing” (most of the tests we write in Salesforce are integration tests, as we interact with a real database instead of mocking it). Also, Salesforce requires 75% code coverage across the org to be able to promote to production, which is great…but is it enough?

For example, imagine you have this code:
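
Something along these lines (a minimal sketch; the AccountCreator class and its test are assumed here, reconstructed from the description that follows):

public with sharing class AccountCreator {
    public Account insertNewAccount(String name) {
        // Note: no validation of the name parameter happens here.
        Account newAccount = new Account(Name = name);
        insert newAccount;
        return newAccount;
    }
}

@isTest
private class AccountCreatorTest {
    @isTest
    static void insertsAccountWithValidName() {
        Account result = new AccountCreator().insertNewAccount('Acme');
        System.assertNotEquals(null, result.Id, 'The account should have been inserted');
        System.assertEquals('Acme', result.Name);
    }
}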

 

 

This test passes with 100% code coverage. But if another developer calls new AccountCreator().insertNewAccount(''), the user sees an exception. Chances are the developer who created this class was doing test-driven development (TDD) but didn't take the time to think about the business restrictions (the account name cannot be null or empty). There is no test checking that the code does this validation.

So having high code coverage is not enough; we have to make sure we have covered all possible scenarios (happy paths, failure paths, edge cases…all of them).
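
For instance, a test like this one (a sketch that would live in the hypothetical AccountCreatorTest class above) is the kind of scenario that is missing:

@isTest
static void rejectsBlankAccountName() {
    Boolean exceptionThrown = false;
    try {
        new AccountCreator().insertNewAccount('');
    } catch (Exception e) {
        // Expected: the business rule says the account name cannot be null or empty.
        exceptionThrown = true;
    }
    System.assert(exceptionThrown, 'A blank account name should be rejected');
}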

Documentation

We write code that lives for years, and once in a while a person has to make a change. This person won't be the same as the one who coded it – even if they share the same ID number, phone number, or Instagram username, the programmer from the future will probably have forgotten some aspect of the code. So we need some degree of documentation in our classes.

I wanna discuss the two main types of in-code documentation. First, let me show one example:
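
Something like the class below (a hedged reconstruction based on the walkthrough later in this post: the class name AccountScorer, the Score__c field, and the callout endpoint are illustrative assumptions, and the misleading comment above the method is intentional, it is part of the example):

public with sharing class AccountScorer {

    // Update the contacts
    public List<Account> getHighScoreAccounts(List<Id> accountIds) {
        List<Account> accounts = [SELECT Id, Name FROM Account WHERE Id IN :accountIds];
        List<Account> accountScoresToReturn = new List<Account>();
        String accountNames = '';
        Map<String, Id> accountNameToId = new Map<String, Id>();

        for (Account acc : accounts) {
            accountNames += acc.Name + ',';
            accountNameToId.put(acc.Name, acc.Id);
        }

        HttpRequest request = new HttpRequest();
        request.setEndpoint('callout:Scoring_Service/scores'); // hypothetical named credential
        request.setMethod('POST');
        request.setBody(accountNames);
        HttpResponse response = new Http().send(request);
        Map<String, Object> parsedJson = (Map<String, Object>) JSON.deserializeUntyped(response.getBody());

        for (String name : parsedJson.keySet()) {
            Decimal score = Decimal.valueOf(String.valueOf(parsedJson.get(name)));
            if (score > 5) {
                // Score__c is a hypothetical custom field used only for illustration.
                accountScoresToReturn.add(new Account(Id = accountNameToId.get(name), Score__c = score));
            }
        }

        update accountScoresToReturn;
        return accountScoresToReturn;
    }
}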

 

Let’s try to fix this with some tools:

*doc documentation (ApexDoc in the case of Apex)

A bunch of tools exist that allow us to put special comments before class and method declarations, which can be parsed to generate documentation automatically. And the good part: you have the documentation in the same place as the code. In ApexDoc, the comment has to start with /** instead of the regular /* Apex comment block, and we have some tags to indicate what we are writing. Here's an example of the main tags (a complete header is sketched after the list):

  • @description: A high-level, human-readable description of what the method does. I prefer to put an easily understandable, quick-to-read first line about what the method is doing. If there is a really important detail the developer needs to know, such as a web service being called, I put it on the next line (without going too deeply into implementation details).
  • @param: For each parameter the method takes, we indicate the type, preconditions, and a description of what it is intended for.
  • @returns: The return type and a description (if the method is not void).
  • @throws: The exceptions the method can throw and when it throws them.
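
Putting those tags together, a header might look like this (a sketch; the method and its behavior are illustrative, not taken from a real project):

/**
 * @description Builds the comma-separated account name list expected by the scoring service.
 * @param accounts The accounts to include; must not be null.
 * @returns The account names joined with commas, or an empty string for an empty list.
 * @throws IllegalArgumentException If accounts is null.
 */
private static String joinAccountNames(List<Account> accounts) {
    if (accounts == null) {
        throw new IllegalArgumentException('The accounts list must not be null');
    }
    List<String> names = new List<String>();
    for (Account acc : accounts) {
        names.add(acc.Name);
    }
    return String.join(names, ',');
}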

The last topic about this is where the ApexDoc should be present. There are two approaches to this:

  1. Public classes and methods
  2. Every class and method

The main advantage of putting documentation in public classes and methods is that after we shared our code with the world, changing the contract we defined is not so simple (without breaking other people’s code) so we can be quite sure that the ApexDoc does not become obsolete (yes, we need to maintain the ApexDoc, too). 

I prefer to put it everywhere and make sure the comments get updated when the methods change, because private methods are code, too. They deserve respect and we will need to read them at some point. We need to be kind to the next developer changing this code (especially because the next one could be our future self, and I don't want to aggravate them).

So now we have all our methods documented and we can skip reading that long and hard method that the piece of code I am working on calls (we go into details if and only if we need it).

Comments on implementation code

When I started to study computer science, a professor told me that comments are very important. 

Well, I completely disagree with that idea in real life. 

Code has to be self-descriptive, and if we need to put a comment in it, we must be sure there is nothing else we can do to make the code more understandable without the comment. A comment line is a new line of code that needs to be maintained, and it took development time to put it there. It also cannot be tested, so chances are the comment ends up saying something the code is not doing. Read the example code above carefully and tell me if you can find any discrepancies.

We will tackle how to reduce comment quantity shortly, but first check really good code comments here on Stack Overflow: Best Comments in Source Code You Have Ever Encountered

I really loved this one: 

//When I wrote this, only God and I understood what I was doing 

//Now, God only knows

Code Quality

This is the last topic I wanna cover today, and I will only scratch the surface. In particular, I want to discuss how we can write our code so it communicates with other developers in a way that is easier, faster, and clearer. Code should be self-explanatory, and we should hide implementation details as much as possible. Ideally, our public method should read as a declaration of the developer's intentions, leaving the implementation details (the imperative part) to private methods (but we should not chain many private method calls, as that forces the developer to jump back and forth and is not optimal).

Let’s start with the method I showed you in the last section:

 

 

In this method, we find three different parts:

  1. Collect data from parameters 
  2. Do business logic
  3. Save and return the result (technically a method should do only one thing and do it well…this does two, but it is part of the trade-off we have to make in real life)

So let’s start with the first part:

 

 

  1. We see the comment is wrong and useless so we delete it.
  2. SOQL query is already declarative (the query has Salesforce best practices issues, but I do not want to focus on this right now), so we can’t hide any implementation details.
  3. We have some variable declarations:
    • accountScoresToReturn: This variable is self-explanatory: we are going to return this list.
    • accountNames: Ok, they are account names but which account names??
    • accountNameToId: Another good name, we know what we are going to store in it.
  4. We have an iteration: This is a candidate to split into another method. Unfortunately, we can't do it without impacting the overall performance or quality (we can't return two values in Apex, so we would need two methods with two iterations, or a function that returns a string and has the side effect of populating the map). But reading this block we start to figure out what accountNames is for: we concatenate the account names with “,” between them. Still, the variable accountNames is not expressive enough, so we have to read the rest of the code to figure out what it is used for.

Second block:

 

 

This block mainly does two things:

  1. It calls a webservice and parses the result
  2. It selects the accounts with the higher scores

And there are plenty of implementation details.

First, we tackle the invocation of the service. It requires the account name string and returns the parsed JSON object. So, let's split it into another method:
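
A sketch of the extracted method, reusing the same hypothetical names as the AccountScorer example above:

private Map<String, Object> callScoringService(String accountNames) {
    // Calls the scoring web service with the comma-separated account names
    // and returns the parsed JSON response.
    HttpRequest request = new HttpRequest();
    request.setEndpoint('callout:Scoring_Service/scores'); // hypothetical named credential
    request.setMethod('POST');
    request.setBody(accountNames);
    HttpResponse response = new Http().send(request);
    return (Map<String, Object>) JSON.deserializeUntyped(response.getBody());
}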

 

 

And our main method changes to: 
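
Something along these lines (still the same hedged sketch; the callout and the JSON parsing now live behind one intention-revealing call):

public List<Account> getHighScoreAccounts(List<Id> accountIds) {
    List<Account> accounts = [SELECT Id, Name FROM Account WHERE Id IN :accountIds];
    List<Account> accountScoresToReturn = new List<Account>();
    String accountNames = '';
    Map<String, Id> accountNameToId = new Map<String, Id>();

    for (Account acc : accounts) {
        accountNames += acc.Name + ',';
        accountNameToId.put(acc.Name, acc.Id);
    }

    // The web service invocation is now declarative.
    Map<String, Object> parsedJson = callScoringService(accountNames);

    for (String name : parsedJson.keySet()) {
        Decimal score = Decimal.valueOf(String.valueOf(parsedJson.get(name)));
        if (score > 5) {
            // Score__c remains a hypothetical custom field used for illustration.
            accountScoresToReturn.add(new Account(Id = accountNameToId.get(name), Score__c = score));
        }
    }

    update accountScoresToReturn;
    return accountScoresToReturn;
}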

 

 

Let’s tackle the second part of the business logic. Before we do this, do not forget to run the complete test suite we have. The business logic iterates over the parsedJson object and selects those that score higher than 5. So we are going to split it into another method which takes parsedJson and returns the List of selected accounts: 
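
A sketch of that method; note that in this reconstruction it also receives the name-to-Id map built earlier, since the selected accounts need their Ids:

private List<Account> selectHighScoreAccounts(Map<String, Object> parsedJson, Map<String, Id> accountNameToId) {
    // Selects the accounts whose score from the service is higher than 5.
    List<Account> highScoreAccounts = new List<Account>();
    for (String name : parsedJson.keySet()) {
        Decimal score = Decimal.valueOf(String.valueOf(parsedJson.get(name)));
        if (score > 5) {
            highScoreAccounts.add(new Account(Id = accountNameToId.get(name), Score__c = score));
        }
    }
    return highScoreAccounts;
}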

 

 

And our main method changes to:  
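
With both helpers in place, the main method reads almost like a description of the developer's intentions (still the same hedged sketch):

public List<Account> getHighScoreAccounts(List<Id> accountIds) {
    List<Account> accounts = [SELECT Id, Name FROM Account WHERE Id IN :accountIds];
    String accountNames = '';
    Map<String, Id> accountNameToId = new Map<String, Id>();

    for (Account acc : accounts) {
        accountNames += acc.Name + ',';
        accountNameToId.put(acc.Name, acc.Id);
    }

    Map<String, Object> parsedJson = callScoringService(accountNames);
    List<Account> accountScoresToReturn = selectHighScoreAccounts(parsedJson, accountNameToId);

    update accountScoresToReturn;
    return accountScoresToReturn;
}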

 

 

Finally, we have the part of the code that returns data. Most of the time this part just returns a value or performs an insert or update on the database.

Conclusion

One of the most important attributes of the work we do is the maintainability of the code we write. We must ensure we are using all the tools we have to enable this. There are hundreds more topics and tools to cover regarding how to create maintainable code, but everything starts with the three points I covered here: 

  1. Automated testing
  2. Documentation
  3. Code quality

Don’t forget you are going to spend more time reading the code than actually writing it. So you should take the time necessary to make it easier to read.

Coming back to our friends, we have Maggie:

Maggie is just a baby; she is learning about the world. Everything she learns can lead her to become Lisa or Homer, just like the project you are working on right now. So, I hope these recommendations help your project become Lisa and not Homer (even though we still love him).

If you've made it this far, thanks for reading! Also, if you want to read more about code, check out our latest articles here.

How do we leverage the Salesforce ConnectApi to build Tok?

To understand the use of the Salesforce ConnectApi, first we need to talk about programming languages. Whether we are expert developers, tech geeks, or average human beings who own a computer, most of us are, at some level, aware of programs. But what exactly are they? Well, to keep it simple, we will say they are a system of vocabulary and commands that allows humans to communicate with computers. For those who are not familiar with this environment, remember that computers think in binary. They speak in 1s and 0s, so programming languages help us translate our instructions into a language that computers will understand.

Apex: Salesforce’s programming language.

At some point, you may have heard about Java, JavaScript, or Python. If you are more experienced, you may also know Ruby, C#, C++, or Swift. All of them are programming languages. Now, what is the scenario when it comes to Salesforce? Well, there is a specific and powerful programming language just for us, named Apex. Salesforce created this language to allow developers to run and customize on-demand apps within Salesforce. Salesforce describes it as:

“Apex is a strongly typed, object-oriented programming language that allows developers to execute flow and transaction control statements on the Lightning Platform server, in conjunction with calls to the API”.

So, how is Apex directly related to us? Well, we build our solutions inside Salesforce, therefore we use Apex as a programming language. Every time we create a product to enhance your experience in the Salesforce environment, we use Apex. That’s what we did for Tok, our app that boosts Chatter capabilities to the next level to ensure real-time communication to our customers. 

What is Chatter REST API?

Let us go a little bit deeper. Shortly after the release of Chatter, the Salesforce enterprise collaboration tool, Salesforce launched Chatter REST API with plenty of messaging features and better integration between third-party apps and Salesforce through HTTP requests. This way, it would be easier for a group of users to be notified about events. Salesforce designed Chatter API to enhance data delivery and treatment, especially across mobile apps. This API is powerful enough to display a feed on an external system, such as an intranet site, after users are authenticated. Salesforce also recommends other uses:

“Use Chatter REST API to display Chatter feeds, users, groups, and followers, especially in mobile applications. Chatter REST API also provides programmatic access to files, recommendations, topics, notifications, Data.com purchasing, and more. Chatter REST API is similar to APIs offered by other companies with feeds, such as Facebook and Twitter, but it also exposes Salesforce features beyond Chatter.”

With the Summer ‘20 release, the company launched a new version under the name Connect REST API (Connect REST API Developer Guide, version 49.0). If you explore Salesforce's guide, you will also notice that there is a preview of the next release (Winter ‘21, version 50.0).

ConnectApi: How does Tok leverage Chatter REST API methods and classes?

As you can see, Salesforce built Chatter REST API to establish bridges between Chatter and other independent platforms. But what happens with applications or other solutions that were built on the Salesforce platform like Tok? How does Tok take advantage of all the methods and classes Chatter REST API provides? Well, even though Chatter REST API is a powerful tool, we don’t use it directly. Instead, we use ConnectApi. 

Since Tok uses Apex, Salesforce's programming language, ConnectApi (also called Connect in Apex) is the best way to access Chatter REST API data and capabilities. It allows our developers to work with feeds, users, groups, messages, and other classes and methods. Salesforce summarizes the power of ConnectApi:

“[Use ConnectApi to] create Apex pages that display feeds, post feed items with mentions and topics, and update user and group photos. Create triggers that update Chatter feeds. Many Connect REST API resource actions are exposed as static methods on Apex classes in the ConnectApi namespace. These methods use other ConnectApi classes to input and return information. We refer to the ConnectApi namespace as Connect in Apex.”
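
As a small illustration (not Tok's actual code), posting a text feed item to a Chatter group's feed from Apex takes just a couple of ConnectApi calls; the group lookup here is only an example:

// Grab any Chatter group to post to (illustrative; Tok's real logic is more involved).
Id groupId = [SELECT Id FROM CollaborationGroup LIMIT 1].Id;

// Post a plain-text feed item to that group's feed through ConnectApi.
ConnectApi.FeedElement post = ConnectApi.ChatterFeeds.postFeedElement(
    Network.getNetworkId(),              // current community/network (null in the default org context)
    groupId,                             // the record the post is attached to
    ConnectApi.FeedElementType.FeedItem, // a regular feed item
    'Hello from Apex via ConnectApi!'    // the message text
);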

Now that you understand how ConnectApi helps us to build Tok, all you need to do is relax. So, take a seat in front of your computer, grab a cup of coffee and experience the way ConnectApi empowers your communications through Tok. If you like this information but you also feel you need to dive deeper, we recommend this article: How Our New App, Tok, Can Improve Business Communication.

Our developers are constantly working to take full advantage of any new version. Haven’t tried Tok yet? Install it here.


How to identify problems in your code with the Salesforce CLI Scanner

Ok, let's see what the Salesforce CLI Scanner is and what it can do for us. The tool is a plugin that uses multiple code analysis engines for various languages (including Apex). It currently inspects your code using the PMD rule engine and ESLint; however, there are plans to add support for more rule engines in the future.

This means the tool will help you identify potential problems, from inconsistent naming to security vulnerabilities, and alert you to them with easy-to-understand results. You can run the scanner on-command in the CLI, or integrate it into your CI/CD framework to enforce rules and ensure high-quality code. And you can run it against every code change.

It's like having a code review in real time. Based on a set of rules, the Salesforce CLI Scanner can identify issues and show them to developers as feedback for fixing the code, saving a lot of time and money.

This tool can be used by ISVs and Salesforce developers to prepare for security review processes. It can improve code quality by identifying a variety of performance and security issues in the development stage. 

Another possible use is to integrate this tool with a CI/CD process to regularly monitor your code’s health.

Remember that the Salesforce CLI Scanner plugin works on all operating systems that Salesforce CLI supports. It has a one-step installation that is quite easy and fast, and if you are uncertain about the command-line commands, it has built-in help (--help).

They have recently added new functionality that allows you to scan Salesforce Lightning Web Components using ESLint, and it's amazing to have that.

These are some of the key features of Scanner v2.3, released in October 2020.

Static analysis

Static analysis, also called static code analysis, is a method of debugging that examines the code without executing the program. The process provides an understanding of the code structure and can help ensure that the code adheres to best practices and doesn't fall into common antipatterns and coding issues that would negatively impact the performance or quality of the code. Based on its rules, the engine reports suggestions for improving your code.

Of course, there are currently a lot of static analysis tools, but most of them work with only one or a few languages, and Salesforce packages commonly mix a variety of components created in different languages. A single static analyzer is insufficient to take into account all the rules, patterns, best practices, etc., and working with multiple static analyzer tools can quickly become a headache.

To solve this, we recommend the Salesforce CLI Scanner plug-in. This plug-in shows the most relevant information to help Salesforce developers improve their code while providing a unified experience.

It has a single and easy installation process with an intuitive set of commands to interact with multiple rule engines. You can define a unified set of rules that are checked by their respective rule engines and also get a report that includes all issues identified by all the engines.

What is PMD?

PMD is a very powerful open-source static analyzer that supports many languages. Additionally, it has a large community of developers building rules for Apex. By default, Salesforce CLI Scanner supports code written in Apex, Visualforce, Java, JavaScript, and TypeScript, but you can easily extend it to support other languages.

Here at Oktana, we have a set of custom rules to improve our code quality and help everyone to have better-structured code.

What is ESLint?

ESLint is an open-source JavaScript linting utility for identifying and reporting on patterns, with the goal of making code more consistent and avoiding bugs.

Because it's very flexible in how it parses JavaScript, it can handle many use cases related to Lightning Web Components. Additionally, with the built-in ESLint TypeScript plugin, the Salesforce CLI Scanner can also analyze TypeScript out of the box.

What does the Salesforce CLI Scanner do for me?

By combining two static scanners in one tool, the Salesforce CLI Scanner allows you to detect a wide variety of problems in your code.

Example Apex issues include:

  • Performance issues (e.g. running SOQL or SOSL queries inside loops; see the sketch after this list).
  • Security issues (e.g. basic sharing violations, simplified CRUD/FLS checks, CSRF, and XSS vulnerabilities)
  • Unclear code (poor variable names, missing comments in classes and methods, long and complicated methods)
  • Error-prone code (e.g. empty try/catch/finally blocks)
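
As an illustration of the first category, here is a hedged sketch of the kind of pattern the PMD rules flag (a SOQL query inside a loop) together with the usual fix:

List<Opportunity> opportunities = [SELECT Id, AccountId FROM Opportunity LIMIT 100];

// Flagged: one SOQL query per loop iteration quickly burns through governor limits.
for (Opportunity opp : opportunities) {
    Account acc = [SELECT Id, Name FROM Account WHERE Id = :opp.AccountId];
    System.debug(acc.Name);
}

// Preferred: query once outside the loop, then look the records up from a map.
Set<Id> accountIds = new Set<Id>();
for (Opportunity opp : opportunities) {
    accountIds.add(opp.AccountId);
}
Map<Id, Account> accountsById = new Map<Id, Account>(
    [SELECT Id, Name FROM Account WHERE Id IN :accountIds]);
for (Opportunity opp : opportunities) {
    System.debug(accountsById.get(opp.AccountId));
}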

Example issues with Lightning Web Components include:

  • Unreachable code
  • Unused variables
  • Invalid regular expressions
  • Stylistic preferences such as enforcing semi-colon at the end of a statement and expecting function names in camelcase

Catching these issues early has several advantages:

  • You can fix code issues immediately and rerun Salesforce CLI Scanner right away to confirm that the issues have been resolved, saving time.
  • You can ensure new developers are following best practices and patterns and avoiding common issues.
  • You can shorten the security review processes. These processes are faster and easier when most issues are identified and fixed before the review even begins.

Different report formats

Salesforce CLI Scanner has an array of useful reporting formats for different uses:

  • Simple table-style reporting to get feedback on code you are actively writing
  • CSV reporting for spreadsheet-based filtering and analytics
  • JSON and XML reporting to feed into other tools for further processing
  • HTML reporting for readable, searchable results
  • JUnit-style test failures to use with a CI/CD setup

Conclusion

Salesforce CLI Scanner is a tool that is here to stay, with a lot of Salesforce engineers actively implementing new features to further improve it.

An upcoming feature is the ability to detect and warn of external code dependencies that may have security vulnerabilities. They are also in the process of adding new rules to identify more security issues.

You can get started with Salesforce CLI Scanner within minutes by following this link to improve your code quality and save time.

What are you waiting for? Go ahead and give this amazing tool a try, and leave your comments about how your experience was!

Learn more from our team here, or check out our services.

Heroku: Simplify and improve your cloud infrastructure

Data plays an enormous role in the success of any organization. Collecting and quantifying pertinent information builds a stronger roadmap for growth. Because of this, companies are collecting and storing data to forecast future trends and develop action plans. That’s a lot of data to manage and most companies don’t have the right technology in place. This is where the cloud sweeps in to save the day. Cloud platforms enable companies to store large volumes of data to repurpose for business transactions. As a result, cloud infrastructure demand is growing for businesses of all sizes, as they have come to realize the massive benefits and potential of utilizing cloud components. Heroku is a first-class platform that helps developers scale more effectively.

What is Heroku?

 

Heroku is a platform as a service (PaaS) cloud that supports several programming languages and is part of the Salesforce Platform. Because it supports the most relevant programming languages used in the industry, it has become a popular tool for enhancing cloud infrastructure. Developers, teams, and businesses of all sizes use Heroku to deploy, manage, and scale apps. Using bi-directional synchronization, Heroku unifies your Heroku data with your Salesforce CRM data. Additionally, Salesforce Trailhead was built and launched on Heroku.

3 ways Heroku enhances cloud infrastructure:

 

  • Several Programming Languages: Initially, Heroku only supported Ruby on Rails. However, over the years it expanded to include other languages such as Java, Node.js, Scala, Clojure, Python, PHP, and Go. The benefit here is that it lets users create applications that are robust and versatile.

 

  • Rapid Delivery: Developers can deploy their code to Heroku with a one-line command in the terminal, getting all the power of Amazon Web Services (AWS) in the background without having to take care of setting up the infrastructure; Heroku has it all covered for you. Access to all of these resources and capabilities massively cuts down project time and allows developers to focus on creativity and higher-level work.

 

  • Scalable Functionality: Heroku allows developers to build without having to sacrifice an impressive UI or effective application functionality. Heroku is able to accommodate spikes and dips in traffic without having to purchase more hardware; the system can cope with higher loads of users or more traffic reaching the system.

 

As you can see, Heroku has many features that simplify and improve cloud infrastructure at a very granular level. By utilizing the Heroku platform, developers are able to build applications that are efficient, visually pleasing, and all at a fraction of the time and cost it would take to develop on other platforms. 

Oktana’s Experience 

 

Here at Oktana, we are avid users of the platform and always seek out the best technologies to leverage for our customers. We've used Heroku to develop a number of outstanding applications across several different industries. Among them, you can find a leading investment firm, a fintech company, and MedZed.

We’ve seen firsthand the kind of power Heroku brings to drastically improve application development. That’s why we’re extremely proud to announce that we’ve recently received a Specialization Badge for outstanding Heroku development from Salesforce! We’re super excited to be recognized for our expertise and it motivates us to develop even more applications with the platform. We think it’s a great tool and highly recommend it to developers who really want to extend cloud infrastructure to the next level.

 

 

 

How to set a tamper-proof Salesforce session cookie with Apex

Imagine we have a website in which we want the user session to persist for a specific period, even when the user closes and reopens the browser. 

Our goal is to have a particular user's operations and state persist on the website whenever the user returns to the site within a given amount of time, using a session cookie.

There are a multitude of ways to store session data. In this article, we are focusing on client-side storage (cookies). Cookies are small pieces of data transmitted from the server to the client (generally only once). When the user comes back to the site, the cookies are sent back to the server. This allows us to track a single user across multiple connections to our site.

Why should I care about the expiration?

Being able to easily expire user sessions allows for extra security measures. In our case, if we wanted to add a new feature that signs the user out of all sessions in other locations (machines where the user has logged in), expiring the session cookie would force those other locations to re-authenticate before gaining access to the account.

This is a good security approach for when a user’s cookie is stolen or his credentials are compromised. Upon changing his password all his sessions are invalidated. An attack using an old cookie cannot continue to wreak havoc on the user’s account.

Signing your session cookie

As we know, all data stored on the client side can potentially be compromised, since a user can maliciously tamper with it. Since we are not able to avoid that, we can give the server a way to recognize the manipulation.

This feature consists of a cryptographically signed cookie. Upon receiving the cookie from the client, verify that the signature matches what you are expecting. 

HMAC (Hash-based message authentication code) is a cryptographic construct that uses a hashing algorithm (SHA-1, SHA-256, SHA-3) to create a MAC (message authentication code) with a secret key. Salesforce provides us with a class named Crypto that contains methods for creating digests, message authentication codes, and signatures, as well as encrypting and decrypting information. Click here to see more about Crypto.

Let’s code!

We are going to create a global helper class for signing session cookies that we will be able to reuse in any part of our project.

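A minimal sketch of such a helper. The class name CookieSigner, the hard-coded key, and the value-plus-signature format are illustrative assumptions, not the original code:

global with sharing class CookieSigner {

    // For now the key lives in the code; a later section moves it into a Custom Setting.
    private static final Blob SECRET_KEY = Blob.valueOf('change-me-to-a-real-secret');

    // Returns value + '.' + base64(HMAC-SHA256 of value) so the server can detect tampering.
    global static String sign(String value) {
        Blob mac = Crypto.generateMac('hmacSHA256', Blob.valueOf(value), SECRET_KEY);
        return value + '.' + EncodingUtil.base64Encode(mac);
    }

    // Recomputes the MAC for the received value and compares it with the received signature.
    global static Boolean verify(String signedValue) {
        Integer separator = signedValue.lastIndexOf('.');
        if (separator == -1) {
            return false;
        }
        String value = signedValue.substring(0, separator);
        String receivedMac = signedValue.substring(separator + 1);
        Blob expectedMac = Crypto.generateMac('hmacSHA256', Blob.valueOf(value), SECRET_KEY);
        return EncodingUtil.base64Encode(expectedMac) == receivedMac;
    }
}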

Within this class, we are using other Salesforce helper classes such as Blob (Contains methods for the Blob primitive data type) and EncodingUtil (to encode and decode URL strings, and convert strings to hexadecimal format).

This allows us to do the following:

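For example, from a Visualforce controller (a sketch; the cookie name 'sid' and the seven-day lifetime are arbitrary choices):

String sessionId = '12345';                        // whatever identifies the user's session
String signedValue = CookieSigner.sign(sessionId);

// Cookie(name, value, path, maxAge in seconds, isSecure)
Cookie sessionCookie = new Cookie('sid', signedValue, null, 7 * 24 * 60 * 60, true);
ApexPages.currentPage().setCookies(new Cookie[] { sessionCookie });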

We can then send that cookie to the client that requested the page. Once the client visits the next page, their browser will send that same cookie back to us.

Assuming we need to store the session ID, the resulting cookie value would be the raw ID followed by its base64-encoded signature (as produced by the sign method sketched above).


Note: 

All cookies created from Salesforce contain ‘apex__’ as a prefix and are encoded before being set for the page.  Learn more about cookies here

To retrieve and verify the data was not tampered with, we could do the following:

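Along these lines (again a sketch based on the helper above):

// Read the cookie back on a later request and check its signature.
Cookie stored = ApexPages.currentPage().getCookies().get('sid');
if (stored != null && CookieSigner.verify(stored.getValue())) {
    // The value is intact: recover the original session ID (everything before the last '.').
    String sessionId = stored.getValue().substringBeforeLast('.');
    // ...continue with the business logic for this session...
} else {
    // Missing or tampered cookie: treat the user as unauthenticated.
    System.debug('Invalid session cookie');
}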

If the MAC is verified, it means that the data was not tampered with by the user and we can continue our business logic. 

Real Secret Key

For extra security, we might set up an auxiliary CustomSetting for storing secret keys. We will create a Secret__c CustomSetting with a field named Base64HmacKey__c.

Now, we are going to refactor our code to include this new approach:

  • Add a new variable to our helper class
  • Add a new function to get the secret key from the CustomSetting
  • Lastly, modify the private variable named SECRET_KEY (see the sketch after this list)
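
Putting those three changes together, the key handling might look like this (a sketch assuming the Secret__c hierarchy Custom Setting described above; sign() and verify() stay exactly as before):

global with sharing class CookieSigner {

    // New variable: the Custom Setting record that holds our secrets.
    private static final Secret__c SECRETS = Secret__c.getOrgDefaults();

    // New function: reads and decodes the key stored in the Custom Setting.
    private static Blob getSecretKey() {
        return EncodingUtil.base64Decode(SECRETS.Base64HmacKey__c);
    }

    // Modified variable: the key is no longer hard-coded.
    private static final Blob SECRET_KEY = getSecretKey();

    // ...sign() and verify() unchanged...
}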

Now we have better handling of our secret key.

Encryption

When using client-side storage, it may be beneficial to encrypt the data to add an extra layer of security. Even when encrypting the data, you need to continue using a MAC.

Using encryption alone will not protect you against decrypting bad data that an attacker deliberately provided. Signing the cookie data with a MAC makes sure the attacker is not able to mess with the ciphertext.

If you’ve made it this far, thanks for reading! Also, if you are interested in Salesforce development go and check our latest articles here.

Salesforce Certification: Informatica Specialist

Okay, it’s not an official Salesforce certification, but hear us out.

Informatica enables you to migrate data from different data sources and transform it according to business requirements. Founded in 1993, the company offers, at its core, an enterprise-grade extract, transform, and load (ETL) tool used to build out your data warehouse. Its technologies are frequently used to migrate or connect data into Salesforce.

Salesforce + Informatica 

If Salesforce is the heart of your business but you’re only using it to manage your sales pipeline, it’s time to think bigger. 

  1. Your customer data needs to be integrated across all of your systems
  2. Your data needs to be clean and accurate
  3. Users need to be able to efficiently access the data whenever they need it

Salesforce enables all of this, empowering your business teams with direct access to data and reporting. Many companies store all of their data in Salesforce, but if you are working with legacy systems, Informatica can help you connect your data, wherever it lives, for a unified view of your business. You can integrate Informatica with many Salesforce products including:

Sales Cloud

Speed up sales cycles, increase agility, and reduce operating costs by synchronizing Sales Cloud with back-office systems such as SAP, Oracle EBS, Siebel, Microsoft SQL Server, Marketo, NetSuite, Workday, and many others. 

Service Cloud

Use Informatica to build efficient client service processes to update case data in real-time across Service Cloud, mobile and on-premises systems. Support agents follow process wizards that automate tasks, eliminate data entry errors and improve their time-to-resolution statistics.

Salesforce Platform

Build your custom Salesforce app more efficiently and safely by applying best practices for Salesforce sandbox management. Informatica can help you quickly create discrete and referentially intact test data sets.

Analytics Cloud

Understand your customers with data sets populated into Salesforce with accurate, clean, and consistent data from your data warehouse.

Marketing Cloud

Enhance campaign ROI, improve marketing segmentation, and boost Salesforce Marketing Cloud reporting accuracy when you synchronize massive amounts of account, contact, and lead data between Salesforce Marketing Cloud data extensions, traditional enterprise systems, and enterprise data warehouses. 

Considerations for Salesforce and Informatica integration

Informatica has put together a list of best practices to consider for Salesforce data and application integration:

  1. What data and apps do you need to connect?
  2. How frequently do you need to sync your data – batch, real-time, or a hybrid approach that utilizes both?
  3. Who will maintain your integration, business users or developers?
  4. How will you maintain data quality?
  5. Are you prepared to maintain an intelligent data catalog to help provide order and visibility to the data available to your users, and manage who sees what?

Informatica Specialist Certifications

So, we’ve established this is not technically a Salesforce certification, but a complementary certification nonetheless.

Informatica certification is in heavy demand by enterprise companies who have large databases they would like to connect to Salesforce. If you work with Salesforce, this certification will expand on the methods you know to integrate Salesforce plus enable you to provide additional value to your internal teams or clients.

The Informatica Secure@Source Certification covers product installation, architecture, server management, configuration, discovery, dashboards, user access and activity, security policy and actions, and anomaly detection.

To understand a little bit more about this certification, we talked with Isidro, a member of our development team at Oktana. He told us this certification is oriented to consultants who want to be specialists in database administration. It helps you learn how to manage large databases in the cloud, and it doesn’t require code because it’s all at the configuration level. But, it does require prior knowledge of information security, database security, protocols and encryption.

In Isidro’s circumstance, he had prior knowledge of network protocols and monitoring, so he found that section was easier to understand. He recommends 10 to 15 hours of study to earn this certification.

If you are part of our team, Oktana will provide you with some extra resources to help you successfully pass the exam and continue to develop your career with Salesforce technologies. If you're interested in joining our team, check out Oktana Careers.

Salesforce Certification: MuleSoft Certified Developer

What is MuleSoft?

Acquired by Salesforce in 2018, MuleSoft is a SaaS company with a world-class industry presence. MuleSoft provides integration software to connect applications, data, and devices. At its core, it allows you to:

  • Efficiently build APIs
  • Manage your API users
  • Easily connect existing systems, regardless of the technologies used

From a developer perspective, MuleSoft streamlines the process of integrating various systems, whether that includes new APIs, Salesforce, ERP, or legacy applications. It is a unique technology in that you can program from a graphical abstraction and create flows with simple tools – a very different environment when it comes to software development.  

For the customer, MuleSoft means you can connect all corners of your system, including Salesforce, and also reduce development time when building new APIs. 

As the platform grows, MuleSoft is only becoming more powerful. 

  • With MuleSoft’s Anypoint Security, security and threat protection can be automated at every layer for ISO 27001, SOC 2, PCI DSS, and GDPR compliance. 
  • The MuleSoft Anypoint Platform™ allows teams to launch applications 3x faster and increase productivity by 300%.
  • API Community Manager enables MuleSoft customers to manage their API users in a community, leveraging Salesforce Community Cloud technology.

Over the years, our focus has been helping customers integrate their systems with Salesforce, whether through custom methods or integration with third-party services. We have encouraged our developers to achieve the MuleSoft certification. This has allowed us to partner with multiple companies that needed help integrating with MuleSoft. 

How one of our partners uses MuleSoft

We recently completed a MuleSoft integration that allowed new data to be entered into a field within a mobile app, then copied and inserted into Salesforce as a new record. To complement this, we also built new automated email workflows to save Sales time when communicating with customers. 

The first step of this project was a synchronization between MuleSoft, web services, and a REST API. Using AWS and Python, we batched the normalization of data. To automate this process, they used Salesforce Process Builder.

MuleSoft Certification

Isidro and Isaias, two developers who have become certified as part of their growth plan at Oktana, agreed that a month of full-time studying is required to pass the MuleSoft Certified Developer Level 1 exam. For those not dedicating this amount of time, they suggest you allocate two months to study.

Even though there are no prerequisites for this certification, they recommend having a background knowledge of REST API services, the basics of web concepts, HTTP requests and that you know how these work from a server perspective. 

Isaias found the hardest topic was error handling, given that MuleSoft has its own logic for solving these issues and it is not easily related to other programming languages. The solution he found was to put the theory into practice by creating a fairly simple app and running flows; this way you understand the behavior of the app and what needs to be done for it to behave differently. He mentioned that MuleSoft is highly versatile and offers multiple tools, without the necessity of learning every existing tool.

Isidro thought the most interesting thing about this certification is that it lets developers build in only a few days what would typically take several weeks.

Here are some study materials as you prepare to become a MuleSoft Certified Developer:

The developers on our team who have worked on MuleSoft projects agree certification is fundamental to working with the technology. The material covered ensures you have the necessary knowledge of APIs and architecture required to work efficiently and to integrate other services. MuleSoft is widely used and growing within the market.

Equipment requirements for the exam:

  • Webcam
  • Microphone
  • Minimum operating system: Windows Vista / Mac OS X 10.5
  • Compatible browser: Google Chrome or Mozilla Firefox. 
  • Minimum RAM: 1024 MB

If you are part of our team, Oktana will provide you with some extra resources such as mock tests to help you successfully pass the exam and continue to develop your career. If you’re interested in joining our team, check out our job offers at Oktana Careers.


Salesforce Certification: Platform Developer II

Congratulations on passing the Salesforce Platform Developer I certification exam and deciding to put your knowledge to the test, even further, by taking the Salesforce Platform Developer II certification exam. It may seem daunting at first, but with some studying and determination, you’ll get that certification under your belt in no time! Coming from someone who has only been familiar with the Salesforce platform for a few months, believe me when I say it is hard but certainly achievable. 


My Background

I've been working for Oktana as a Staff Software Engineer for a few months and, prior to joining, had never heard of the Salesforce platform. Since then, I've completed numerous trails on Trailhead, worked on a few projects, and passed both developer exams. I've learned a lot since I started working with Salesforce. By studying for the exams as well as working on the projects I was assigned, I feel as though I've learned more than I would have by just doing projects. By studying, I was able to learn the best practices, as well as some other details that I would not have known unless a project related to them. This, however, was a lot of information to take in, so I am here to hopefully provide some helpful information on what and how to study.

General Recommendations

As with any certification exam, there are a number of recommendations that could help to get a better score. Things like:

  • Schedule the test at a time that is convenient for you. If you’re not a morning person, don’t schedule it in the morning. Do it at a time that you will be most awake and focused.
  • Make sure you know the time that you have to take it. Set a reminder on your phone, your calendar, anything to help you remember. I had to learn the hard way on my second attempt, as I didn’t pay attention to the fact that it was on a 12-hour clock, and I thought it was at 12:15 PM when it was really at 12:15 AM, 12 hours earlier (yes, they schedule that late). Luckily I was able to retake it the following day.
  • Eat a healthy meal that promotes focus and doesn’t make you feel sluggish.
  • Take your time and don’t rush. You have two hours, and you should use every second to your advantage.

Salesforce Developer II Exam Overview

The exam is broken down into five sections. They are currently weighted as follows:

  • Advanced Developer Fundamentals – 18%
  • Process Automation, Logic, and Integration – 24%
  • User Interface – 20%
  • Testing, Debugging, and Deployment – 20%
  • Performance – 18%

When compared to the first test, the second test is much harder and covers a wider range of topics. This is to be expected as it is a higher certification, but should not scare you. To prepare for it, here are some tips about this specific test.

Focus on Force

The site, Focus on Force, has plenty of great practice tests you can go through that are pretty similar to the actual questions. In fact, they also do a good job of breaking out and describing each section of the test which you can see here: Salesforce Platform Developer II Certification Contents

As you go through each section, I highly recommend carefully examining and learning why you got a question wrong, and what the correct answer is. Keep doing so until you take all the practice tests, and do them over and over until you get perfect scores. One test in particular that helped me was the question bank test. It is a collection of 20 questions picked at random from each of the sections. The low question count allows you to quickly complete a test, see the results, and then retake it. If you are on the go, the mobile view of the website is good as well and can enable you to study anywhere. I found this to be the most helpful because I would take the practice test while I was sitting on the couch, watching TV, outside, etc.

Apart from the practice tests on Focus on Force, they have study guides that are extensive and very helpful. If you read through them all and take notes, you’ll end up with an abundance of information to study. Using these notes, I was able to make flashcards that I used to study.

Flashcards

Flashcards are a great way to study. In my case, it was mainly beneficial to write down and make the flashcards, as opposed to just looking and memorizing things. When you type or write out flashcards, I find it helps commit information to memory. I was able to make flashcards out of the notes I took on the Focus on Force study guides and used Quizlet to flip through them while I was out and about. This helps especially with memorizing specific information pertaining to numbers, percentages or, everyone’s favorite, governor limits. When faced with those types of questions, there isn’t a way to work through and narrow down the answer as much as the others. It’s either you know the answer or you don’t, and this helps to make sure you are able to answer confidently.


Superbadges

In order to receive your certification, you must complete the required superbadges. They are currently:

  • Apex Specialist: Use integration and business logic to push your Apex coding skills to the limit
  • Data Integration Specialist: Demonstrate your integration skills by synchronizing external data systems and Salesforce
  • Advanced Apex Specialist: Build complex business logic using advanced Apex and Visualforce programming techniques

These superbadges can be completed before or after the exam, but I strongly suggest that you complete them beforehand. Of course, if you’ve passed the Salesforce Platform Developer I exam and received your certification, you’ll already know how beneficial they can be. But, it doesn’t hurt to stress the importance of these superbadges. They force you to look up information in the documentation and then implement that in the exercise. If you do these superbadges beforehand, you gain experience and hands-on knowledge before you take the test so that you can get a better, more informed, perspective on the question.

When You’re Ready For the Exam

The most important thing to remember about taking the test is to read each question and answer very carefully. The questions and answers are extremely tricky. The questions usually come down to two possible answers. From there, you must go through both choices to examine the small details or differences between the two. Don’t be afraid to mark the questions for later if you don’t know the answer right away. Once you get the questions that you know for sure out of the way, you can use the available time to go through the skipped questions to see if you know it now or not. If you don’t pass, remember those questions you were stuck on and research the subject pertaining to them so you can see why you didn’t get it. Once you know this, the second attempt will go by a little smoother.

If you don't pass, don't get discouraged! Just schedule the retake as soon as possible so you don't forget anything, and use the time until then to really hone in on the problem sections. Remember, if I, someone with only six months of experience on the platform, can do it, then you definitely can. Don't let anyone tell you otherwise. Now, go get that certification!

We recently shared tips on how to study for the Salesforce Nonprofit Cloud Consultant certification exam and plan to share more soon. Want to join us? Check out Oktana Careers.