As a former Marathon Oil production and reservoir engineer who has personally managed nearly 1,000 wells in the company’s core play, Jesse Filipi understands firsthand how time-consuming set point optimization can be. During his time at Marathon, the company underwent a massive well expansion, but despite the steep growth curve, Jesse felt that he and his team were not given the tools to be successful. Considering that the engineering team is effectively the revenue center of any production company, he knew there had to be a better way to manage large-scale well operations.
Today’s unconventional oilfield optimization technology stack usually combines a pump-off controller (POC) and/or variable frequency drive (VFD), connected to a SCADA system, with an optimization software package layered on top. This stack achieves remote visibility and control, but does little to address the process and human-error aspects of optimization. Set point optimization (SPO) is a high-value activity that follows well-defined processes, making it highly suitable for automation. In practice, however, these processes are applied manually and repetitively, requiring experienced staff and significant man-hours to implement successfully. The result is that engineers focus the majority of their attention on the highest-producing or “most important” wells, leaving the rest of the field unoptimized. In short: too many wells, too little time.
So, then, how much of the field is actually being optimized? In Ambyint’s experience, regardless of operator size, basin, or wellsite technology, fewer than 20% of the wells that enter our system are truly dialed in relative to a baseline well state. This aligns directly with the Pareto Principle, which in this case suggests that 20% of the wells are receiving 80% of the attention.
Consider the following example: you oversee a field of 500 wells. At a field average of 5 strokes per minute (SPM), that’s over 1 billion strokes annually. With roughly 10 discrete pieces of information to review in the optimization workflow, each spread across multiple time series, that’s (conservatively) 2 million data points to consider annually. Add the human factor, and you have the perfect storm for significant process dispersion in optimization outcomes.
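The back-of-envelope arithmetic above can be checked in a few lines. The well count, SPM, and 10 workflow inputs come from the example; the once-daily review cadence used to reach the data-point figure is an assumption:

```python
# Scale of the 500-well optimization problem, per the example above.
WELLS = 500
AVG_SPM = 5                      # field-average strokes per minute
MINUTES_PER_YEAR = 60 * 24 * 365

# Total pump strokes across the field per year
strokes_per_year = WELLS * AVG_SPM * MINUTES_PER_YEAR
# 500 * 5 * 525,600 = 1,314,000,000 -> "over 1 billion strokes"

# ~10 discrete workflow inputs per well, reviewed across multiple
# time series; a once-daily cadence is assumed for illustration
WORKFLOW_INPUTS = 10
REVIEWS_PER_YEAR = 365

data_points_per_year = WELLS * WORKFLOW_INPUTS * REVIEWS_PER_YEAR
# 500 * 10 * 365 = 1,825,000 -> roughly 2 million data points

print(f"{strokes_per_year:,} strokes/yr")
print(f"{data_points_per_year:,} data points/yr")
```

Even at this conservative cadence, the numbers land where the text says: over a billion strokes and about 2 million data points a year for a human team to reason over.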
That may seem daunting, but these conditions of large data volumes and repetitive, well-defined processes are perfect for a machine, which can direct 100% of its focus to keeping wells dialed in and thereby eliminate process dispersion. This frees humans to focus on what they do best, examples of which are shown in the lower right-hand quadrant of the matrix below:
At Ambyint, we use Automated Set Point Management (ASPM) to let a machine focus solely on dialing in 100% of the wells, 100% of the time. The graphs below highlight the value to be realized in having a machine lean out less-than-optimal set points:
When we set out to remediate the operational and technological gaps in the process, we began by addressing data and information losses occurring in the legacy chain at the wellsite. To correct these losses, we developed an IoT-enabled edge device, the High-Resolution Adaptive Controller (HRAC). The HRAC is directly connected to our cloud and production optimization platform at all times, sampling and producing high-frequency, time-synced, event-based, stroke-level data.
Because it is directly and securely connected to the cloud, the system enables true machine learning, continuously ingesting and internalizing new data with adaptive control. This lays the foundation of consistent, high-quality, reliable data needed to enable artificial intelligence initiatives. By comparison, conventional technology presents a complex integration problem: significant information losses occur as a result of that architecture, and data quality falls short of what AI initiatives require.
Ambyint was recently given the opportunity to apply ASPM to the wells of three major operators in the US across five fields, all equipped with major-label POCs and VFDs on horizontal wells. The wells underwent an initial classification during a baseline period, received a series of recommendations over the course of multiple months, and were classified again at the end of the pilots. The slide below depicts the results of these pilots, and the improvements are clear:
So, what’s the value of set point optimization? In the end, everything comes down to the business case. Returning to the 500-well field example, we can apply the initial classification percentages to the baseline data and the post-ASPM classification percentages to the ‘after’ data. What we see is about $20 million in value creation from ASPM alone, spanning uptime, uplift on underpumping wells, electricity savings, and failure reduction through linear stroke decreases on overpumping wells. This excludes additional savings from other levers such as overhead and labor, not to mention the value created when the existing team is freed to focus on higher-value activities.
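The value-stacking arithmetic behind a figure like this can be sketched as a per-lever roll-up. Every percentage and dollar figure below is a placeholder invented for illustration only, not Ambyint’s pilot data; the real inputs are the baseline and post-ASPM classification percentages from the slide:

```python
# Hypothetical per-lever value roll-up for a 500-well field.
# All shares and $/well figures are PLACEHOLDERS for illustration;
# substitute your own baseline and post-ASPM classification data.
WELLS = 500

# lever -> (assumed share of wells affected, assumed annual $ per affected well)
levers = {
    "uptime improvement":     (0.30, 50_000),
    "uplift on underpumping": (0.25, 80_000),
    "electricity savings":    (0.60, 5_000),
    "failure reduction":      (0.20, 40_000),
}

# Total annual value = sum over levers of wells * share * value-per-well
total = sum(WELLS * share * per_well for share, per_well in levers.values())
print(f"Illustrative annual value: ${total:,.0f}")
```

With these invented inputs the roll-up lands in the low tens of millions, the same order of magnitude as the ~$20 million figure above; the structure of the calculation, not the placeholder numbers, is the point.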
If you would like to learn more about set point optimization and the results of the pilots in this article, Jesse goes into detail in the full webinar, available on YouTube here:
If you found this article helpful and want more content from Ambyint, sign up for email notifications to the right, or contact us at firstname.lastname@example.org/rdambyint2018!