My journey working with a reputable charity organization as a software developer.

Francisco Barrios
9 min read · Oct 23, 2020

One of the first team-based positions I’ve held in the development field, working for a real client, was as a back-end developer. The experience itself was very interesting and unique. We were assigned to further develop and improve the client’s application, with scalability as a major focal point.

So what is Eco-Soap Bank?

Eco-Soap Bank’s stated mission is to fight the spread of preventable illnesses caused by a lack of access to soap, to reduce the waste generated by the hotel industry, and to provide livelihoods to economically disadvantaged women.

Eco-Soap Bank is an American non-profit organization founded in Pittsburgh, Pennsylvania in 2014. The organization collects used soap from hotels located in Cambodia, employs economically disadvantaged women to sanitize and process the soap into new bars at local hubs, and partners with other organizations to distribute the soap to schools, communities, and health clinics. Eco-Soap Bank also provides some soap to women in village communities and trains them as soap sellers.

Learn more about project Eco-Soap here: link

From start to finish!

When we started, we began with the planning and research phase. As a collective, we held meetings and traded ideas and approaches with each other until we reached an agreement on direction.

Click here for Whimsical Link for visuals

Technical Research and Planning phase:

What do we do during our technical research? User stories!

When breaking a release into user stories, the first thing I do is shift perspective: as the user, what would I want to be able to do while I’m on this site? One release we settled on during our Trello planning phase was “As an authenticated buyer/user, I should be able to view orders I previously placed,” along with “As an admin, I should be able to view all orders.” Now we have a few user stories that we can break down into individual tasks to bring forward to development.

What to do next? Break it down even more!

Now that we have multiple user stories, we can determine how to implement them task by task. We can start by deciding that wherever the user may be initially (home page/dashboard), they will need a button to navigate to a separate component where we will mount and map the desired information. We can flesh out a button leading to a new component created to house the desired functionality, which will map over the order information received. As we plan this out, we can start breaking down the things we will need to implement, such as axios calls to our API to retrieve the desired data from tailored endpoint(s) with basic CRUD operations; in this case, a GET request fired on mount. We can plan to mount this call with a useEffect hook (or componentDidMount) in the new orders component. Our RESTful Node API will connect to the Eco-Soap GraphQL API to retrieve and/or mutate information using GraphQL syntax and the gql library in our back-end helper functions (orderModel).
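
To make that plan concrete, here is a minimal sketch of what such an orders component could look like, assuming a React front end with axios; the endpoint URL and field names are placeholders for illustration, not the project’s actual code:

```jsx
// orders.js: a rough sketch of the planned orders component
// (the endpoint URL and order fields are assumptions)
import React, { useState, useEffect } from 'react';
import axios from 'axios';

export default function Orders() {
  const [orders, setOrders] = useState([]);

  // Fire the GET request once, when the component mounts
  useEffect(() => {
    axios
      .get('https://our-rest-api.example.com/orders') // hypothetical endpoint
      .then((res) => setOrders(res.data))
      .catch((err) => console.error(err));
  }, []);

  // Map over the order information received from our API
  return (
    <ul>
      {orders.map((order) => (
        <li key={order.id}>
          {order.buyerName}: qty {order.quantity}
        </li>
      ))}
    </ul>
  );
}
```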

Time time time… how long will this all take though?

Properly estimating the completion of these desired functions seems straightforward until you account for unexpected occurrences such as bugs, deployment issues, time spent researching, and other miscellaneous variables. Moving forward, we’d start by creating a basic button on the home page/dashboard to navigate the user to our desired component using routing, the useHistory hook, or other basic means of redirection. This should take about 5–15 minutes to flesh out, with no distractions taken into account. In this case we need to create a component to house the desired order information, so we’d create a new JS file, name it something like “orders.js”, and begin to work in there. The initial creation of a basic component shouldn’t take more than 5 minutes; however, additional implementation of features and correctly setting up the maps and useEffects will eat up an estimated 30 minutes to 2 hours (adjusting for unknown complications such as bugs or required research). After this we’d move to testing locally and then deployment to see if we like our changes, and polish from there. This process can vary widely, from 30 minutes to 3+ hours, depending on whether we feel satisfied that it meets what our client expects.
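
The redirect button itself really is only a few lines. A sketch, assuming react-router-dom’s useHistory hook and a hypothetical ‘/orders’ route:

```jsx
// A basic dashboard button redirecting the user to the orders component
// (assumes an '/orders' route is registered in the app's router)
import React from 'react';
import { useHistory } from 'react-router-dom';

export default function ViewOrdersButton() {
  const history = useHistory();
  return <button onClick={() => history.push('/orders')}>View My Orders</button>;
}
```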

So now that we’ve planned out our initial approach, it’s time to get to work! What have we built as a result of all this effort?

I worked on the back-end portion of our project with my colleagues. We focused on getting our RESTful API up and running smoothly, providing endpoints with relevant data for our front end to utilize.

Implementing mock data:

We utilized the faker library to generate randomized mock data. Faker lets us populate every field with relevant, realistic mock values, which our front end consumes in the form of seeded data. In our seed files we define the key fields and then invoke faker methods from the library for their values. In the future we can create a loop to populate the seeds, and index them in a hash table to bring lookups down to O(1).
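
A minimal sketch of what such a seed file might look like with knex and faker (the table and column names here are illustrative, not our actual schema):

```js
// seeds/01_orders.js: a sketch of a knex seed file using faker
const faker = require('faker');

exports.seed = async function (knex) {
  // Clear any existing rows before reseeding
  await knex('orders').del();

  // Build 50 mock orders, invoking faker methods for each value field
  const orders = Array.from({ length: 50 }, () => ({
    buyer_name: faker.name.findName(),
    email: faker.internet.email(),
    organization: faker.company.companyName(),
    quantity: faker.random.number({ min: 1, max: 500 }),
  }));

  await knex('orders').insert(orders);
};
```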

What does our API do and what are the pros?

The back end we’re building acts, in essence, as a “proxy” API: it sits between our stakeholder’s back end/API and our front end, so the front end never has to communicate with the stakeholder’s API directly. Our back end contributes multiple points and user solutions to the overall project (see the sketch after this list), including:
  • Ease of access: our front end can utilize and play with the data we provide without having to query the stakeholder’s API directly via axios calls or other fetching means and comb through the response for the desired data. In layman’s terms, we can tailor endpoints to meet specific needs.
  • Our stakeholder’s API uses GraphQL, while our back end exposes a RESTful API built with Node.js. GraphQL and REST are different styles of API; our REST API uses multiple endpoints and offers a wider array of implementation options for retrieving and managing larger amounts of data.
  • We reduce security risks when operating with multiple APIs. Sensitive data is generally invaluable, so protecting these assets is critical, and how much we invest in asset protection must be assessed and calculated accordingly. Solutions include applied middleware, authentication, and tokens/hashing, with stronger encryption as desired or needed by the stakeholder. (Scalability without manipulating the original DB/API.)
  • The proxy API also keeps the front end from becoming cumbersome, both in size and in application logic.
  • From a technical perspective, our API handles all the order handling, solving the problem where users were previously unable to complete the automatic ordering process.
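
To illustrate the proxy idea, here is a hedged sketch of what a back-end helper like our orderModel could look like using the graphql-request library; the upstream URL, auth header, and query fields are assumptions, not the stakeholder’s actual schema:

```js
// orderModel.js: a sketch of a helper that queries the stakeholder's
// GraphQL API and returns plain JSON for our REST endpoints to serve
const { GraphQLClient, gql } = require('graphql-request');

const client = new GraphQLClient(process.env.ECOSOAP_GQL_URL, {
  headers: { authorization: `Bearer ${process.env.ECOSOAP_API_TOKEN}` },
});

// Fetch all orders from the upstream GraphQL API
async function getAllOrders() {
  const query = gql`
    query {
      orders {
        id
        buyerName
        quantity
        status
      }
    }
  `;
  const data = await client.request(query);
  return data.orders;
}

module.exports = { getAllOrders };
```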

From ideation to delivery — the process:

  • Ideation: We established a wireframe and user-flow for our desired project.
  • Ideation: We noted down and researched all of the technologies expected to be used in this project (Redux, Redux Toolkit, the Node/Express framework, Jest, GQL/PG, Stripe, AWS, Docker, Swagger, Commitizen, etc.), continuously adding to the list in real time to remain adaptable to our stakeholder’s vision.
  • Ideation: Established our API’s intended purpose and how best to adapt and incorporate those needs into the overall project, with its emphasis on the order-handling process as instructed by our stakeholder’s vision.
  • Delivery (back-end focused): We set up our RESTful API to act as a proxy connecting to the AWS-hosted Eco-Soap API, using GQL to query responses, and installed our projected dependencies.
  • Delivery: Fleshed out a data model using DB Designer (this is subject to change, and the updated version lives in a separate schema; this was just a quickly thrown-together example).
  • Delivery: Created knex migrations, configured knex with Postgres, and used Dockerfiles to build an image and host it in a container to avoid team-wide environment issues (a sketch of the migration follows this list).
  • Delivery: Seeded our API using the Faker library for mock data.
  • Delivery: Fleshed out an orderModel (helper functions) and an orderRouter (endpoint routing), and provided mock data in the form of seeds as a JSON response for our front end to request via axios calls (see the router sketch below).
  • Delivery: Incorporated unit testing via Jest to test all back-end endpoints for our order model (an example test follows the sketches below).
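
For illustration, a sketch of what one of those knex migrations might look like (the column names are hypothetical):

```js
// migrations/20201023_create_orders.js: a sketch of a knex migration
exports.up = function (knex) {
  return knex.schema.createTable('orders', (table) => {
    table.increments('id').primary();
    table.string('buyer_name').notNullable();
    table.string('email').notNullable();
    table.integer('quantity').notNullable();
    table.string('status').defaultTo('pending');
    table.timestamps(true, true); // created_at / updated_at
  });
};

exports.down = function (knex) {
  return knex.schema.dropTableIfExists('orders');
};
```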
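
Likewise, a minimal sketch of the orderRouter’s endpoint routing, assuming the hypothetical orderModel interface shown earlier:

```js
// orderRouter.js: a sketch of the endpoint routing layer
// (route paths and the model interface are assumptions based on this write-up)
const express = require('express');
const Orders = require('./orderModel');

const router = express.Router();

// GET /orders: respond with all orders as JSON for the front end
router.get('/', async (req, res) => {
  try {
    const orders = await Orders.getAllOrders();
    res.status(200).json(orders);
  } catch (err) {
    res.status(500).json({ message: 'Failed to retrieve orders' });
  }
});

module.exports = router;
```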
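
And a sketch of a Jest endpoint test using supertest, assuming the router is mounted at /orders on an exported Express app:

```js
// orderRouter.test.js: a sketch of an endpoint test with Jest + supertest
const request = require('supertest');
const app = require('../app'); // assumed location of the exported Express app

describe('GET /orders', () => {
  it('responds with 200 and a JSON array of orders', async () => {
    const res = await request(app).get('/orders');
    expect(res.status).toBe(200);
    expect(Array.isArray(res.body)).toBe(true);
  });
});
```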

What are some of the issues and challenges we faced?

Keeping it simple, the main bulk of the challenges we didn’t foresee was the time spent debugging and troubleshooting both existing and new code. As a result, we spent more time reading and understanding the code base than writing it. We also had to research technologies that were new to our team, such as Postgres and GQL. While setting up Docker, we dealt with operating-system complications as well.

Working with a team:

During the project, our four-person web team was divided up and assigned tasks to complete. The team member assigned to the back end alongside me was Kolade. Kolade and I pair programmed, reviewed, and researched all the technologies we were expected to implement. During the week, we spent about 2–5 hours a day as a pair working toward completing our assigned tasks. Rotating between driver and navigator, I spent the majority of the time driving, trying to incorporate both of our solutions and ideas whenever applicable. We delivered our product as a result of our continued collaboration, including help from our PL with guidance and documentation.

Pros & Cons to working alongside a new team:

Pros:

  • Fun and entertaining work environment when personalities complement each other.
  • Easier to exchange and access relevant information spread across the team.
  • More efficient progression when tasks are communicated and worked on asynchronously.

Example: When initially researching and implementing testing methods, we explored extras such as incorporating faker while seeding our test cases, and had an awesome time pair programming in the process.

Cons:

  • When conflicts of interest emerge, assertive personality types tend to clash more than desired.
  • When two people work on the same file, deciding which changes to discard or move to production can be a hassle.

Example: A team member and I both worked on a router model; we each completed the task, but only one version was moved to production while the other was discarded. I learned to follow our guidelines more thoroughly in order to meet expectations.

Self reflection:

I honestly learned more than I expected while working as a team on this Labs project. I was refreshed on critical skills such as patience, understanding, compassion, respect, integrity, and empathy. I had a general understanding of and good practice with these skills prior, but applying them in this environment proved to be more involved, yet rewarding and vital to the overall arc of our final product. I learned that a healthy team environment and good relationships among your colleagues can greatly expedite both production and the efficiency of the team as a whole.

You’re only as strong as your weakest link. Keep a `No man left behind` mindset, and help others the way you’d like to be helped when needed.
