Weatherford International's Frequently Asked Questions page is a central hub where customers can go with their most common questions. These are the 34 most popular questions Weatherford International receives.
As LOWIS is a complex software system there is the potential for users to encounter the occasional error when using the application. Before submitting a support request, let us first look at the different types of errors you might encounter and the steps you should take when they occur.
LOWIS Client Freezes:
If the LOWIS client stops responding to input, it could be due to one of two reasons: it is waiting for a response from the server, or it is trying to do a large amount of processing. By default, the LOWIS system is set up with a 10-minute timeout. The client will wait ten minutes for a response from the server before giving up. Depending upon the type of screen you are on, you may eventually see a message such as:
If this occurs, you should first contact your local LOWIS Administrator, as it indicates an issue talking to the LOWIS server. There may be an issue on the server side that the administrator should address or that they need to contact support to look into further.
LOWIS displays an error message:
There are a number of expected warning messages within the LOWIS system. These are there to inform users when a problem with configuration or data prevents something, be it a card, well test, or well, from being analyzed or evaluated. These can also appear if you attempt to use the system improperly, such as trying to analyze a card without first selecting one.
If you have questions about these messages, you may first want to consult the LOWIS documentation and knowledge base articles. If you are unable to find the answer to your question there, contact support for an explanation of what the message means. Please include a screenshot of the message when submitting any questions.
However, if you get an error message that indicates a script error or references a file and function, like in the examples below, please submit a support ticket. Please make sure to include a screenshot of the message and as much information as possible about what you were doing when you encountered the error. Try to include the well or device that you were working with, the screen that you were on, and any steps leading up to the issue if applicable.
LOWIS Client Exits Unexpectedly:
If the LOWIS client exits unexpectedly, you should submit a ticket to the support staff. The LOWIS client may generate a memory dump of the error on your local machine, which will usually be indicated in the error message.
The next time you launch the client it will ask if you would like to upload the memory dump file to the server. Please select yes, and make a note of this when submitting a ticket to support. Additionally, please include as much information as possible about the steps you performed before the client exited.
Version 6.6.0.157
Released on July 3, 2019
Enhancements
Configuration
Water Alternating Gas (WAG) Injection Well configuration is now supported.
PCP Rod data validation and auto-filling have been enhanced.
Catalog
The Catalog has been merged into a single SQL Server database.
The database name can now be customized during configuration.
Bug Fixes
Configuration
ESP motor and nameplate data can now be added to the catalog database correctly.
Design
In Design Parameters, Jet Pump power fluid data can now be set correctly.
In Gas Lift Design, an Injection Depth issue has been addressed.
In Plunger Lift Design, the Excess Gas value is now correct.
Tuning
For Condensate wells, Gas Lift Pressure Survey tuning now works as expected.
A PCP IPR Tuning issue that previously caused crashes to occur has been addressed.
WAMI
Gas Lift Pressure Survey Tuning and IPR tuning now work as expected.
For Condensate wells, an issue related to Gas Lift Performance curve generation has been addressed.
QuickGen
In the QuickGen utility, an issue related to ESP mid-perforation depth configuration has been corrected.
Version 6.6.1.102
Released on October 25, 2019
Enhancements
Configuration
Long names up to 128 characters are now supported for the Gas Lift Valve Manufacturer and Model.
Long names up to 128 characters are now supported for the ESP Manufacturer, Pump Model, and Motor Model.
WAMI
Gas velocity profiles and erosional velocity are now available for all well types.
For PCP wells, IPR tuning, L-factor tuning, and wearing factor tuning are now supported.
PCP analysis and performance curve generation are now supported.
Bug Fixes
Catalog
PCP drive head inputs can now be saved as expected with the selected units in the PCP catalog.
Tuning
In Match Pressure Surveys, the PCP L-factor can now be tuned correctly.
WAMI
An issue with the creation of Plunger Lift well models has been corrected.
Known Issues
During the upgrade process, all WellFlo components including WAMI are uninstalled. If WellFlo is installed on a ForeSite server, the WAMI component will be removed when WellFlo is upgraded and ForeSite will be impacted until WellFlo is installed again.
If you run WellFlo while ForeSite is running the Weatherford Well Modeling service on the same server, it may fail unless an extra WellFlo license is available.
This video will walk through the Remote Service Manager's Automatic Service Recovery function. We will discuss various Recovery Stages, Minimum Runtime requirements, and alternate backup paths. We will also demonstrate a real-time recovery effort.
CygNet Bridge 4.0
We are pleased to announce that CygNet Bridge version 4.0 is now available for download from the Weatherford CygNet download site. The new release provides updated functionality including expanded support for CygNet Mobile and a new CygNet Bridge API feature.
Highlights of additions and changes in CygNet Bridge 4.0 are listed below. For more details, see the CygNet Bridge v4.0 Release Notes document.
CygNet Mobile
With the release of CygNet Bridge 4.0, CygNet Mobile now supports the CygNet Operator application on mobile devices running Android OS 5.0 Lollipop or later. CygNet Mobile push notification processing has also been optimized to better handle large notification workloads.
CygNet Bridge API
CygNet Bridge 4.0 also contains the newly available CygNet Bridge API feature, which you can use to build custom applications for securely accessing your CygNet data from outside of your CygNet installation, whether over the web, via the cloud, through a web service, or from a mobile device.
Access the CygNet Bridge 4.0 software release and any future patches from the CygNet Download site, on the Weatherford Software Support portal (login required): https://customer.weatherford.com/CygNet/
If you need help accessing the software download site, contact CygNet Support at [email protected] or 1-866-4-CygNet.
Download the CygNet Operator application for mobile devices from:
Apple devices - iOS App Store > search for CygNet Operator ( https://www.apple.com/ios/app-store/ )
Android devices - Google Play Store > search for CygNet Operator ( https://play.google.com/store/apps/details?id=com.weatherford.CygNetMobile )
Note: Accompanying the CygNet Bridge 4.0 release, the separately available CygNet 9.2 release includes new and updated functionality and features introducing or enhancing Nexus, Canvas, CygNet EIEs, CygNet Measurement, and the CygNet Well Test Module. Refer to the CygNet 9.2 article in this section for more information.
We are pleased to announce that CygNet version 9.2 is now available for download from the Weatherford CygNet download site. The 9.2 release provides new and updated functionality and features including Nexus, Canvas, CygNet EIEs, CygNet Measurement, and the CygNet Well Test Module.
Highlights of additions and changes in CygNet version 9.2 are listed below. For more details, see the CygNet v9.2 Release Notes document.
Highlights
Nexus Service
The new Nexus server is available in the CygNet v9.2 release to operate with your CygNet software installation. Nexus enables you to publish your CygNet data to an MQTT server, so that you can provide data accessibility to subscriber clients using MQTT messaging.
Canvas and Canvas View
Canvas, the CygNet HMI design tool, offers a significant feature expansion in the 9.2 CygNet release. The new version provides many new or modified features. Newly available controls include the Detail, Donut, Linear Gauge, Sparkline, and Tile View controls. The Tag Chooser control has been modified so that it now supports group hierarchies. Other usability enhancements include the addition of Canvas style sheets, a find and replace feature, support for multiple control selection and property editing, a new Controls tab on the Screen pane, an Include in Script control property, and other application-wide enhancements. Visit the CygNet blog to access webinars about using some of the Canvas features.
CygNet EIEs
CygNet EIEs provide several new features for version 9.2. The CygNet remote device editor now contains a Points page so that you can view all points associated with a remote device.
Several new EIEs are available in this CygNet release. Additions include a pair of IoT EIEs, the IoT EIE and IoT Sparkplug EIE, and two OPC EIEs, the OPC Lufkin EIE and the OPC Weatherford EIE. The IoT and IoT Sparkplug are template-driven EIEs that use the MQTT protocol to subscribe to data that is published to the MQTT server. The OPC Lufkin EIE enables CygNet host communication with field devices supporting standard OPC protocol, in conjunction with the OPC Comm EIE. The OPC Weatherford EIE enables CygNet host support for dynagraph card features unique to the Weatherford RPOC device, using standard OPC protocol.
CygNet Measurement
New CygNet Measurement features for the version 9.2 release include a new Balance Contribution report, an optional new Export: PGAS Data XML command, and several usability modifications to existing reports and commands.
CygNet Well Test Module
The CygNet Well Test Module is now available for use with CygNet 9.2. This separately licensed module is integrated with CygNet SCADA to track oil, water, and gas (OWG) values for tested wells in a production facility. Additional component installation and/or configuration are required prior to using the new module.
There are many other enhancements and fixes contained in CygNet v9.2. We encourage you to upgrade your CygNet installations and access the new functionality.
Access the CygNet 9.2 software release and any future patches from the CygNet Download site, on the Weatherford Software Support portal (login required): https://customer.weatherford.com/CygNet/
If you need help accessing the download site, contact CygNet Software Support at [email protected] or 1-866-4-CygNet.
Note: Accompanying the CygNet 9.2 release, the separately available CygNet Bridge 4.0 release includes new functionality and features introducing or enhancing CygNet Bridge functionality, such as support for Android devices for CygNet Mobile, and a new CygNet Bridge API feature for providing secure access to your CygNet data when external to your CygNet installation. Refer to the CygNet Bridge 4.0 article in this section for more information.
We offer our clients more than tools and services. To ensure that you get the maximum benefit from our artificial-lift and production-enhancement technologies, we developed a comprehensive series of production training programs.
These courses ensure complete technology transfer and sustain the performance improvements gained by using our technologies, products, and services. Each course features basics on technology, including operations, case studies, and examples that vary from hand-calculations to the use of interactive software packages.
Each course is presented by qualified product leaders and trainers, each of whom is experienced in the field and in teaching petroleum engineering.
Go to the Training Schedule
On-Site Training
Any training course can be held at your location, which gives you a cost-efficient method of teaching large groups in your organization. The timing and content are flexible to meet your needs. You can also sponsor a course at your location and invite individuals from other companies to participate with you. Email us for more information.
How To Register
To register, go to the training schedule. Click the register button next to the course you would like to attend and complete the registration form. To ensure that you have a seat in any given course, please register at least four weeks in advance.
Course Fees: The course fees are listed on the training schedule page and on the registration form. The fee includes a trial software license (where applicable), manuals, and documentation. We encourage you to bring a laptop when indicated.
Payments: Payment for courses should be made in advance. This can be done by sending a check with the emailed copy of your application form, providing a credit card number with your registration, or, alternatively, we can issue an invoice provided that a work order or purchase order reference is available. We cannot confirm enrollment in the course without payment.
Meals and travel: Travel, meals (with the exception of lunch), and accommodation expenses are the responsibility of the attendee.
Cancelation: Should you need to cancel enrollment, we will refund the full fee less a 5-percent administration fee if you provide written notice at least 21 days prior to the start of the course. We cannot issue a refund for cancelations received with fewer than 21 days' notice, but registration may be transferred to another employee from the same company at any time.
Should you wish to cancel a booked course in favor of another course within the same calendar year and of the same duration, we will not charge an administration fee if we receive written notification at least 21 days before the start of either course. If the change is requested within 21 days of the start of either course, we will charge a 5-percent administration fee.
Should you wish to transfer from a course in the current calendar year to a course that occurs in the following calendar year, the difference in fees (if applicable) must be paid before the start of the course. No other transfer options are available.
Weatherford reserves the right to cancel a course for reasons of insufficient registration or force majeure, in which case all payments will be refunded.
Currently, the ForeSite Production Optimization Platform supports full Optimization for RRL wells through the Lift Optimization and Failure Management modules. Incremental releases with additional RRL enhancements will be delivered monthly.
A new major release will be available at the end of Q4, adding functionality for optimizing Naturally Flowing, ESP, Gas Lift, and Injection wells. This release will also enable users to optimize their surface equipment and reservoir, in addition to centrally managing Well Services.
For this conversation, we will be looking at both real time data and WSM data. In order to acquire the necessary information to analyze this, you will need access to the LOWIS Client and the SQL Server database at a minimum.
In general, wells fail to sync to WSM because of a discrepancy in the API numbers or the LWNAME value. Because these are meant to be identifiers, it is challenging to automatically determine which identifier is correct. Therefore, the focus of this document is on verifying that the data in WSM matches what is in BFile and on modifying it when the two differ.
Data
In general, we are interested in four pieces of data:
API 10
API 12
API 14
Well Name
Real Time
The first step is knowing where this information can be found. For the BFile data, we are going to be looking at the LOWIS Client. Specifically, we are interested in the Well Group Configuration grid under the Configuration sub-menu in the start menu. This view contains all of the defined wells in the BFile database. A defined well is one that is currently accessible in LOWIS. When you delete a well, it is flagged as undefined, among other things. This view contains all of the real time information that we are interested in.
API 10 = Unique Wellbore ID (API 10)
API 12 = Unique Wellbore ID (API 12)
API 14 = Unique Wellbore ID (API 14)
Well Name = Well Name
This table also contains some important information about the WSM connection. These columns might be hidden, but aclPrimaryKey and WellCompletionXRefKey both contain very useful information.
WSM
On the WSM side, there are four tables that may be updated with information during the nightly sync from BFile:
WellCompletionXRef: the completion
SubAssembly: the subassembly
Assembly: the assembly as a whole
Well: the well at the surface
The tables are related to each other with specific key columns, but they all contain API data that is expected to match up.
WellCompletionXRef
API 10 = wxrAPI10
API 12 = wxrAPI12
API 14 = wxrAPI14
SubAssembly
API 12 = sasSAID
Well
API 10 = Weluwbid
The expectation is that the API numbers everywhere match up. The rows of the different tables need to match on the corresponding keys. For instance, in WellCompletionXRef there is a wxrFK_SubAssembly column, which will match a certain row in the SubAssembly table on the sasPrimaryKey column. These rows should have a matching wxrAPI12 and sasSAID.
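For example, a quick check along these lines (a sketch only, using the table and column names described above and assuming the API values are stored as text) will list completions whose API 12 does not agree with the sasSAID on the linked SubAssembly row:

-- Sketch: find completions whose API 12 disagrees with the linked SubAssembly.
-- Adjust the comparison if the API columns are numeric rather than text.
SELECT wxr.wxrPrimaryKey,
       wxr.wxrAPI12,
       sas.sasSAID
FROM   WellCompletionXRef AS wxr
       JOIN SubAssembly AS sas
         ON sas.sasPrimaryKey = wxr.wxrFK_SubAssembly
WHERE  ISNULL(wxr.wxrAPI12, '') <> ISNULL(sas.sasSAID, '');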
The Link
The way that these are linked together to display information in the LOWIS Client is determined by the production.nav.xml file. In this file, there is a function call whose join clause defines the link. The clause WELUWBID =* WELUWBID means that the well data and WSM data are linked based on whether the Well.Weluwbid value matches the real time API 10, the Unique Wellbore ID (API 10) column on the Well Group Configuration grid. Some customers instead join WSM and real time data on WLCOMPXREF: the WellCompletionXRefKey column on the Well Group Configuration grid is matched to the wxrPrimaryKey column of the WellCompletionXRef table. Either way is fine, but you need to be aware of which one your system is using.
The above section describes how the client pairs a well in the navigator with its WSM data, but the nightly sync from BFile to WSM only links the data based on the WLCOMPXREF equaling wxrPrimaryKey.
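For illustration only, here is a rough sketch of what the two linking strategies amount to in SQL. The real time side is not an ordinary SQL table, so the RealTimeWell rowset and its column names below are hypothetical stand-ins for the Unique Wellbore ID (API 10) and WellCompletionXRefKey values shown on the Well Group Configuration grid; the legacy =* notation corresponds to an outer join in modern syntax.

-- Sketch only: RealTimeWell is a hypothetical stand-in for the real time data.
-- Strategy 1: link the well data to WSM on the API 10.
SELECT rt.WellName, wel.Weluwbid
FROM   RealTimeWell AS rt
       LEFT JOIN Well AS wel
         ON wel.Weluwbid = rt.UniqueWellboreID10;

-- Strategy 2: link on the cross-reference key instead.
SELECT rt.WellName, wxr.wxrPrimaryKey
FROM   RealTimeWell AS rt
       LEFT JOIN WellCompletionXRef AS wxr
         ON wxr.wxrPrimaryKey = rt.WellCompletionXRefKey;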
To Sum Up:
API 10: BFile Wellbore ID (API 10) = Well.Weluwbid = WellCompletionXRef.wxrAPI10
API 12: BFile Wellbore ID (API 12) = SubAssembly.sasSAID = WellCompletionXRef.wxrAPI12
API 14: BFile Wellbore ID (API 14) = WellCompletionXRef.wxrAPI14
Well Name: BFile Well Name = NavName
Links:
Well is linked to ???
Assembly is linked to Well (aclFK_Well = welPrimaryKey)
SubAssembly is linked to Assembly (sasFK_Assembly = aclPrimaryKey)
WellCompletionXRef is linked to SubAssembly (wxrFK_SubAssembly = sasPrimaryKey)
One more comment before we move on. Both job and component information is linked to the Assembly table. If you have multiple wells with the same API 10, then they should all display the same component and job data.
Analysis
Now that you know where all the data is, we need to discuss how to analyze the situation.
In the cslift\lift\database\log folder, there is a log file generated during the nightly sync from BFile to WellCompletionXRef. The log file name ends with the date it was generated, following the format BfileWSMSync_MMDDYYYY.log (for example, BfileWSMSync_01222016.log).
This log file contains a record of all of the data that was updated for a given well, and it will display script-generated and SQL-generated errors if a well is not updated due to a failure. I am not going to get into each error that you may see, but essentially there are only a few types of problems:
Mismatch of APIs
Incorrect table links
Failing a unique constraint
Duplicates in BFile
Mismatch of APIs
The most common error that you will receive is that a set of API numbers is incorrect or cannot be updated. This can happen for a number of reasons. What is important to remember is the process.
There is a certain flow to the information that I will now attempt to demonstrate. For our example, we are going to look at the well MFU-400. As you can see below, the API 10, 12, and 14 values exist in BFile, and it has been synced to WSM because the WellCompletionXRefKey has a value.
Here is our data in BFile. The assumption is that all data in BFile is what we want to see in WSM.
So, during the nightly sync, what row of the WellCompletionXRef table is it going to try to update the API 14 to 42003319070000? The one whose wxrPrimaryKey is ONC000ZL.
Does the SubAssembly have a matching API 12 value? If not, then this would also need to be updated. So, what row of the SubAssembly table is it going to try to update the API 12 to 420033190700? The one whose sasPrimaryKey is ONC0020O.
Does the Well table have a matching API 10 value? So, what row of the Well table is it going to try to update the API 10 to 4200331907? This one is a bit trickier because you have the link from SubAssembly to Assembly and a link from Assembly to Well. So, it would look something like this:
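As a rough sketch (using the key columns from the summary above and the SubAssembly key from this example), that chain can be followed with a query like this:

-- Sketch: walk from the SubAssembly through the Assembly to the Well row to
-- see which Weluwbid the sync will try to update.
SELECT wel.welPrimaryKey,
       wel.Weluwbid
FROM   SubAssembly AS sas
       JOIN Assembly AS acl
         ON acl.aclPrimaryKey = sas.sasFK_Assembly
       JOIN Well AS wel
         ON wel.welPrimaryKey = acl.aclFK_Well
WHERE  sas.sasPrimaryKey = 'ONC0020O';   -- the SubAssembly key from this example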
For this example, they are all the same. The sync would not have to change anything.
However, what if the API 12 had been modified? Then what would need to be modified?
The wxrAPI12 and the wxrAPI14 on the WellCompletionXRef, as well as the sasSAID in the SubAssembly table, would all have to be changed.
Incorrect Table Links
What if the WellCompletionXRef data is accurate, but the SubAssembly data is not accurate? What if there is already a SubAssembly row that has the API 12 you want to change it to? A better solution in this situation would be to modify the link between WellCompletionXRef and SubAssembly, modifying the data so that the WellCompletionXRef record is linked to the SubAssembly record with the appropriate API 12.
Anytime you update a link like this, it is important to remember that there may be other rows in other tables that are also looking at the record you just updated, and they might also need to be modified. Take the SubAssembly in our example. A query such as the sketch below can show which tables may be linked to that SubAssembly record.
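One way to approximate that query (a sketch that relies on the FK_SubAssembly naming convention seen above, so it may miss links that do not follow it) is to search the catalog for columns that reference SubAssembly:

-- Sketch: list candidate tables/columns that may point at a SubAssembly row,
-- based on the ...FK_SubAssembly naming convention (wxrFK_SubAssembly, etc.).
SELECT TABLE_NAME, COLUMN_NAME
FROM   INFORMATION_SCHEMA.COLUMNS
WHERE  COLUMN_NAME LIKE '%FK[_]SubAssembly%';
-- Then check each table returned for rows whose FK column still equals the
-- sasPrimaryKey of the record you are moving away from.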
You would need to make sure to update all of the links from these tables to the new SubAssembly, not just the WellCompletionXRef’s wxrFK_SubAssembly. If these come up empty, then you do not have to worry about it.
Failing a Unique Constraint
The SQL database has some explicit constraints on its columns. Mainly, you will come across unique constraints. For instance, you cannot insert two rows into the WellCompletionXRef table with the same API 14 value or the same LWNAME. You cannot insert two rows into the SubAssembly table with the same API 12 value.
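Before the sync runs, a quick check like the following sketch (reusing the placeholder API values from the example above) will show whether another row already holds the value you are about to write:

-- Sketch: look for an existing completion that already holds the API 14 you
-- are trying to sync (placeholder value from the earlier example).
SELECT wxrPrimaryKey, wxrAPI14
FROM   WellCompletionXRef
WHERE  wxrAPI14 = '42003319070000';

-- The same idea applies to the SubAssembly API 12.
SELECT sasPrimaryKey, sasSAID
FROM   SubAssembly
WHERE  sasSAID = '420033190700';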
I also want you to be aware that wells synced to WSM stay in WSM even if they have been deleted from the BFile database. Suppose you add a well in LOWIS with a specific well name, sync it to WSM, and then delete it. If you add a new well with the same well name in BFile, there will be no problem. However, you will be unable to sync the well to WSM because there is already a well with that well name.
Duplicates in BFile
Any data in BFile must be accurate before you can send it over to WSM. If there is more than one well with the same API 14 number in BFile, then the sync will give you an error before even attempting to bring the information over to WSM.
Resolving Conflicts
By now, you should have an understanding of where the information is coming from, how it is compared, and how data mismatches can be identified. Once you have identified a data mismatch, the solution should not be far behind.
The goal of any conflict resolution, as far as this topic is concerned, is to have all the data in WSM match the data in BFile. If you do that, then you have succeeded.
A potential solution might be to manually update the actual API values in WellCompletionXRef, SubAssembly, and/or Well (a sketch of such an update follows below).
A potential solution may include modifying a link between tables, rather than the data itself. In this instance, please remember to update all other links to the record being changed where appropriate.
A solution may mean modifying the well name of a row in WellCompletionXRef, so that a well of that name in BFile can be added.
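As an illustration of the first pattern only, a sketch of such an update is shown below; the key and API values are the ones from the earlier example, so always confirm the target row (and take a backup) before running anything like this, and remember that the linked SubAssembly and Well rows may need matching updates.

-- Sketch: bring the API values on one WellCompletionXRef row back in line with
-- BFile. The key and API values are taken from the earlier MFU-400 example.
UPDATE WellCompletionXRef
SET    wxrAPI10 = '4200331907',
       wxrAPI12 = '420033190700',
       wxrAPI14 = '42003319070000'
WHERE  wxrPrimaryKey = 'ONC000ZL';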
Weatherford's ForeSite Production Optimization Platform allows operating companies to capitalize on in-depth asset data analytics and make proactive decisions to maximize production.
The ForeSite platform processes real-time sensor data from every corner of an asset to provide a clear analysis of the reservoir, well, field, and surface network. This single platform provides engineering models and predictive analytics to help you evaluate, predict, and respond to ever-changing conditions.
ForeSite is a web-based software application that combines advanced RRL well management features with the power of CygNet SCADA. Its easy-to-use interface provides clear visual cues, so you can quickly evaluate each well's health, follow historical well trends, and predict the possibility of upcoming failures.
Features of the platform include:
KPI-based Dashboards for understanding asset health
Downhole card calculations using the Gibbs and Modified Everitt-Jennings methods
Normalized and open card pattern matching
Data analytics for failure predictions based on Big Data and Machine Learning
Intelligent alarming capability
Real-time monitoring and alarming
Map view for wells with configurable heat map layers
Configuration of controller set points for optimization and commands
Instant trending and graphing
Workover recording and management
Interactive historic wellbore diagrams to assist with root cause analysis
Calculation and monitoring of mean time between failures (MTBF)
Proposed Wellbore functionality has been added to WSM to enable users to visualize changes to the Wellbore Equipment after a drilling/completion/recompletion/remedial/routine job.
The proposed wellbore can be accessed through Services -> Job Management -> Job Events -> Proposed Wellbore.
It provides the following functionality:
Add, Edit, Delete and Update are available to update Proposed Equipment for the Wellbore.
Get the Current Wellbore Equipment to Proposed Wellbore so that the proposed/probable changes can be made.
Validate the changes made through Validate Wellbore.
Visualize the Proposed Equipment through Wellbore Diagram.
“Component Change Report” to display
List of Components that are in Current Wellbore but not in Proposed Wellbore.
List of Components that are in Proposed Wellbore but not in Current Wellbore.
The Wellbore reports from both Job Management and Enter Toursheet Wizard have been modified to add a menu item, “Copy Proposed Wellbore to Current Wellbore”. Clicking this menu item allows the user to directly fetch the entire Proposed Wellbore into the Current Wellbore without having to manually add/edit each component.
All the menu items on Proposed Wellbore and Wellbore Report have security set to true, which enables the LOWIS Administrator to specify the level of access for different groups through LOWIS Admin -> Users -> User Permissions.
LOWIS Configuration for Well Testing
Well testing in LOWIS is designed to track test values coming from a separator and, if configured appropriately, to automatically rotate testing between a set of wells. A well test stored in LOWIS can be marked as good by a LOWIS user, or LOWIS can be configured to automatically accept a well test as good if its oil value is within a certain amount of the previous test. In order to acquire and maintain test values, a separator is associated with a Remote Terminal Unit (RTU). The various discretes and meters configured on the RTU are used to track the accumulated oil, water, and gas (OWG) values and to turn header valves on or off.
The first thing required in LOWIS is to create a generic RTU. To do this, one simply needs to open a LOWIS client and click on Start > Configuration > Group RTU Configuration.
Users can expect the screen to be initially empty until they add an RTU. A user can add an RTU using the green plus button. Clicking this button will bring up a new popup window where the RTU information can be entered. The requested information will change based upon the Communication Port it is configured to use.
Once the RTU is created, it can be treated like any other RTU in LOWIS. However, generic RTUs do not have a preconfigured list of registers. Any information that needs to be pulled during a scan must be manually added in the form of meters, discretes, and analogs. Like other RTUs in LOWIS, the points configured can contain whatever type of value you want them to represent out in the field. If you want a discrete that is checking a tank level, or a meter to accumulate the amount of water that passed through in a 24-hour period, then an RTU simply needs to be configured to represent that information. It is important that the sensor sending data to the register in the field matches the user’s configuration in LOWIS, because LOWIS will assume that whatever was configured in the application reflects what is out in the field. For example, if a sensor measuring temperature is plugged into register number 13 and a user configures an analog with a register of 13 but names it Casing Pressure, then the value that shows up as “Casing Pressure” in LOWIS will actually be the temperature value.
There are a few sensor points that are very important for a separator to function in LOWIS. The first are three meters that are used to track the amount of oil, water, and gas in the separator. For generic RTUs, meters must be added from the RTU Meter Configuration child tab off the Group RTU Configuration grid.
From here, users can view all meters associated with the selected RTU. There typically are no meters added by default, so the grid should be blank. To add a new meter, click the green plus button. This will open a popup window which allows the user to specify important information about the meter.
Meters will need to be created for oil, water, and gas in order to track these values. These will be accumulators, in that they will be used to accumulate the values received from the controllers until they are reset. The register number will depend upon how the hardware is configured. Typically the function code and format specified here will be acceptable, but that too is dependent on the hardware configuration.
Once the meters are created, there is additional information that may need to be added (example: the OPC tag if it is an OPC COM port). This can be done using the quick edit option on this grid.
Once the RTU and meters have been added the separator is now ready for configuration. To configure a separator in LOWIS, open the Separator Configuration grid.
The first step is to add a separator by clicking the green button. The separator ID needs to be an abbreviated separator name, while the description is the full separator name. The type is important, and the different options will be discussed later in this document. The RTU selected is the RTU that is to be associated with the separator.
The second step is to link the separator and meters by clicking the link button. This will open a new window, where the user selects an RTU and then links the separator with the meters configured on the RTU.
It is important to ensure that when the RTU is scanned, the OWG values associated with the LOWIS separator will also be updated.
A header will automatically be created when a separator is added in LOWIS. In order to add additional headers, or modify an existing one, you must go to the Header Configuration child tab. If multiple headers exist for a single separator in the field, then the headers need to be specified as parallel when added to this grid. The header's name is important because the positions associated with it (covered shortly) have names that begin with that header name. When adding a header, it can be associated with an RTU that is not the separator's RTU.
Positions are crucial to the effectiveness of the well test process. A position is associated with a header and an RTU. The person configuring the position must provide the correct function code and address to scan in order to receive the valve's status, as well as the correct function code and address for sending the command to turn the valve on and off.
Header and position configuration grids are both child tabs of the Separator Configuration grid.
The RTU, Meter, Separator, Header, or Position can all be deleted by using the red “x” button on their respective configuration grids.
In order to be effective, the newly added positions must be associated with a well. This is done in the Well Testing Configuration view, a child tab of the Well Group Configuration grid. The well selected in the grid can be set to a particular separator and a particular position on that separator. Additionally, there are some rules the well can be required to comply with before it will start a test, like ensuring the well is In Service.
The final portion of configuration must be done in the Group Separator Status grid. This grid is located under the Status sub-menu. This grid is covered more fully in a different portion of this document, but the focus now is the WellTest Spread Sheet child tab.
In this tab, there are many well testing configuration options that can be modified for a particular well. Options of particular interest are:
The frequency of tests (every #days)
# of hours to purge
Do you want to automatically evaluate the well?
The maximum # of overdue days
# of pre-purge hours
Do you demand the well be tested?
Minimum test hours
Is the well active for testing?
Maximum test hours
The separator has now been successfully configured! When the RTU is scanned, it will update the separator's status and its individual positions, which will let LOWIS know if the well test has been completed or if it needs to remain in testing. Once the test has been completed, LOWIS will save the well test information based on its accumulators and tell the separator to begin purging in preparation for the next test well.
Viewing Well Test Information
Historical well test information is best viewed from the Test Results grid. This grid contains the majority of the information collected during a well test. This information is displayed on a per-well basis, based upon the well selected in the navigator. The date of the well test and the OWG values are available from this screen, in addition to other material. Users can also manually add a well test to the selected well or modify the information on an existing well test. They can also transfer a single well test, or multiple well tests, from one well to another if that needs to be done. A well test, or multiple well tests, can be deleted from this screen. Finally, users can export well tests to, or import them from, a CSV file.
An important field is the SPT (Standard Production Test) code. The SPT code is used to denote if a well test is valid or not. An invalid well test would be one that is inconsistent with the expected oil output by a large margin. The different values indicate different things.
SPT Codes:
-1 = not yet evaluated
0 = valid
9 = invalid
Wells in LOWIS can be configured to automatically evaluate well tests when they come into the system. In order for this to take effect, a well must be configured in the Well Test Spread Sheet child tab (available from the Group Separator Status grid). On this grid, the user must set the Auto Evaluate column to 1. If the focus is only on water, or oil, then the Auto Evaluate Oil or Auto Evaluate Water columns can be set to 1. The Initial Test Date acts as the date of the first test and is crucial for evaluation. Likewise, the Initial Water, Initial Oil, Oil Decline, and Water Decline values are all necessary to estimate what the test values should be by the time a test occurs. It then allows for a percentage increase or decrease.
Another grid that may be desirable to view well test information on is the Test Evaluation/Results Grid. This contains information about the most recent well test for all wells, with the test results grid underneath.
Viewing the Separator/Well Test’s Status
The best place to view information about the separator’s status, and the status of the current well test, is the Group Separator Status grid. On this grid, a user can view the separator’s state: whether it is Testing, Purging, Suspended, etc. They can see how long the separator has been in that state and how much longer it will remain in that state, where appropriate. It displays the current test value up to that point and the last time it was scanned. It also displays which well is currently in test and from what position. If there are any alarm messages, they will be visible here.
From here, a user can scan the separator, stop all well tests on a particular separator, begin the well tests again, or simply tell the separator to move on to the next well. Where appropriate, a user can download the well test sequence from the controller, clear alarms, or begin a leak check. Additionally, there are a large number of child grids accessible from this screen that are also very useful. While the Well Test Spread Sheet grid has already been covered above, the WellTest Well Status grid shows when the last good well test was collected for all wells on that separator. Users can view grids containing the points' statuses, view all separator reports, see separator-related comments, explore the RTU directly via RTU Read/Write, and see the separator with its header(s) and position(s) in a graphical format.
Separator Status:
Well Test Status:
LOWIS Admin Configuration Options:
AUTOEVALSHOWZERO: if this option in WELLTESTPROC is not Y, then a well test whose well is configured to not auto evaluate (ACAUTO = 0 or (ACAUTOO = 0 and ACAUTOW = 0)) will have a message to that effect written to the AARESULTDESC column saved on the well test.
DONOTALLOWZEROOIL: if this option in WELLTESTPROC is 1, then a well test with an oil value of zero will automatically evaluate to bad.
MANUALWELLTESTSHIP: if this option in WELLTESTDBSERVER is 1, then the NEEDSHIP column on the well test will be set to 1.
DOAUTOEVALFORMANUAL: if this option in GENERAL is N, then manual well tests will not be automatically evaluated.
SENDATPCOMMAND: if this option in SITEOPTIONS is N and the SEPSATP column is set to 1 on the separator, then it calls a function that is supposed to set all valves to production.
SETSPT99ONDEL: if this option in SITEOPTIONS is 1, then well tests will be deleted if they are set to 99. (Used only by certain customers.)
WTPROCTIME: this option in WELLTESTPROC is the number of hours to delay before processing a separator when the RTU is scanned.
WTPROCALARM: if this option in WELLTESTPROC is not 0 or default, then an RTU will alarm when the separator changes.
Valve Methods:
Valve Method 1
This method has an individual address for each valve. The same address is used to move the valve to test or to production. An argument is sent to the RTU/PLC that tells the direction of the movement. Each valve has a discrete point sensor that tells if the valve is in test or not. This method supports parallel well testing.
Valve Method 2
This method services rotary valves. Each valve has an individual address, including the "blank valve". The blank valve is a special valve which has no well connected. When this valve is moved to the test position, then all well valves are in production. This is the way a production reset is done. The same address is used to move the valve to test or to production. An argument is sent to the RTU/PLC that tells the direction of the movement. Each valve has a discrete point sensor that tells if the valve is in test or not. This method does not support parallel well testing.
Valve Method 3
This method has an individual address for each valve but it has a production reset command. The production reset address is like is a special valve address. Each valve has a discrete point sensor that tells if the valve is in test or not. This method supports parallel well testing. The production reset address is given by the LEAK position on the header or by the one that has the PRODRSET bit set.
Valve Method 4
This method addresses 16 valves at a time. To move valves to test, a 16-bit word is output to an address. For each 1 bit, the corresponding valve is moved to test. A production reset is done by outputting all 1s to another address. This method supports parallel well testing. The production reset address is given by the LEAK position on the header.
Valve Method 5
This method has an individual address for each valve. The same address is used to move the valve to test or to production. An argument is sent to the RTU/PLC that tells the direction of the movement. Each valve has up to 3 discrete point sensors that tell the status of the valve. These sensor addresses are sequential: the smallest is the production status (1 = in production), the next highest is the test status (1 = in test), and the third is the remote/local status. In order to test, the valve must be "in remote", "not in production", and "in test". This method supports parallel well testing. Code has been embedded that assumes that the controller type is ASC574.
Valve Method 6
This method addresses individual valves. To move a valve to test a function code (generally B6) and a well number is output. To reset one or all valves back to production, a function code (usually B5) and a relay number is output. This method supports parallel well testing. The production reset address is given by the LEAK position on the header.
Valve Method 7
This method addresses individual valves. To move a valve to test, a function code (generally 67) and a relay number is output. To reset one or all valves back to production, a function code (usually 69) is output. This method supports parallel well testing. The production reset address is given by the LEAK position on the header.
Miscellaneous Options:
Blank is a true or false value for a position that is only used when the separator is using valve method 2. This determines which wells go into test and which go into production.
Configuring LOWIS to communicate with devices using OPC requires a series of configuration settings that have to be coordinated for communications to succeed. These settings are placed in the appropriate OPC Manager, COM Port, and Scan Task sections.
COM Port
To start off, the COM Port needs to be created as (or converted to) an OPC Port type (within the COM Port the Port Type setting will be OPC if this is already set). Once this has been configured, the important configuration settings are OPCMGRS and the pair of OPC Server and OPC Server Name settings.
The OPCMGRS setting needs to be set to the section name of the OPC Manager process that will be responsible for processing requests meant for the OPC Server process that this COM Port is being configured to interact with. Despite the plurality implied by the name, this setting can only point at a single OPC Manager process.
The other key settings are the OPC Server and OPC Server Name settings. OPC Server needs to be set to the ProgID of the OPC Server process that needs to be connected to, and OPC Server Name is the name of the machine on which the OPC Server process is running. OPC Server Name can be skipped (unspecified) if the OPC Server process is running on the same machine where the OPC Manager process will be running.
OPC Manager
OPC Manager settings are found in the Configuration section under Common->Server->Processors->OPC Manager Root, with one section for each OPC Manager configured on the system. The key settings in these sections are OPC Configs and Dbg Level. OPC Configs is a comma-delimited list of the COM Port sections that this OPC Manager is responsible for communicating with.
Dbg Level is a bit flag value that controls the diagnostic output that will be generated by this OPC Manager process. The flags are provided in a listbox, each with its own check box to enable/disable that particular flag. These flag values are detailed in the OPC Manager Debug Levels article.
Scan Tasks (Native)
For scan tasks that normally talk non-OPC protocols but are being configured to talk OPC, there are some special settings that will need to be added. The key setting that will need to be added is a setting for each COM Port that will be using OPC (named after that COM Port). The purpose of this setting is to specify the pattern that is to be used to generate the automatic tags for each register that is being requested. As an example, if we were trying to configure the SAMRPC scan task to talk to an OPC Server on com20 with the typical SAMRPC tag generation, a COM20 setting would be added to the SAMRPC scan task configuration with a value of LUFAUT.
There are 3 other settings that are specific to modbus devices and OPC. The first is OPC_RegOffset, which can be used to specify the register offset to be used just for OPC communications in case a different value is needed than for normal native communications (the default value for this setting is the RegOffset used for native communications).
The second setting is GenericIntegersAreUnsigned and requires a bit more explanation. The modbus specification doesn't really specify datatypes, and as such each modbus device has the option to use any datatype for each register as long as the software consuming it knows what that datatype is. For most of LOWIS's existence, 16-bit integers were just configured in LOWIS as shorts; whether the range of values was signed (-32767 to 32767) or unsigned (0 to 65535) was only considered within the context of the analog point the data was being placed into. With many OPC Servers not having built-in register maps and relying on either the tag configuration or the OPC tag itself to specify whether the value is signed/unsigned, it became necessary for LOWIS to know whether these shorts are signed or unsigned. For newer modbus devices, care has been taken to make sure that LOWIS's register map makes this distinction, but the older pre-OPC register maps have not been updated to do so, so the GenericIntegersAreUnsigned setting was added to specify on tag creation that any registers not explicitly specified as signed or unsigned should be treated as signed (setting is 0) or unsigned (setting is 1). If only a few registers need this distinction, it is often better to just update the register map for those few registers to be the correct designated sign type.
The third setting specific to modbus and OPC is GenerateModifiers. This setting (default is 0/off) is turned on when the OPC Server requires that the datatype of the register be included as part of the generated tag name so that it will know how to process the data out of the device's reply. There is also a COM##GenerateModifiers option available (where ## is the Com Port number associated with the OPC Server) to override this setting if a specific OPC Server instance needs a different setting than the others when more than one OPC Server provider is being used.
Currently, the following scan tasks support OPC communications:
AE Scan
Modbus Scan
8500 Protocol Scan
Scan Tasks (OPC Specific)
Scan tasks that are just OPC only have an additional configuration requirement. Each OPC specific scan task can only be used to talk to a particular OPC Server instance, so like the COM Port configuration it needs to have a OPC Server and OPC Server Name set of settings. For communication to work properly though, there must also be a COM Port configured with the same OPC Server and OPC Server Name values (at a case sensitive matching level), and there can only be one such COM Port configured. This requirement is due to the fact that when requests are sent to an OPC Manager for processing they include the COM Port number that is associated with the OPC Server instance to be used. To figure out this COM Port number the OPC scan task will go through the list of all available configured COM Ports and try to identify the one COM Port that has matching OPC connection information. The COM Port section found also identifies which OPC Manager instance the request should be forwarded to for processing.
In this article you will find a list of the flags that can be configured in the DbgLevel configuration setting in OPC Manager and when they might be of use. Keep in mind that all of these flags enable extra processing and extra diagnostic output from the OPC Manager process, which will increase the overall loading of the system where they are enabled and possibly impact performance. In general, most of these should only be enabled on recommendations from support to aid in diagnosing a problem.
The normal recommended DbgLevel setting is 189 (DBG_LEVEL_LOCKSTRING + DBG_LEVEL_STARTSTOPS + DBG_LEVEL_DATACHANGE + DBG_LEVEL_THREADFLOW + DBG_LEVEL_REFRESH + DBG_LEVEL_HEARTBEAT).
DBG_LEVEL_LOCKSTRING (1) - This flag is no longer used.
DBG_LEVEL_LOCKSTRINGLVL2 (2) - This flag will output a line of diagnostic each time the lock critical section is entered/exited. This will generate a lot of diagnostic output and should only be activated if OPC Manager appears to be locking up.
DBG_LEVEL_STARTSTOPS (4) - Outputs a diagnostic message at the start and stop of certain functions as well as at key points in the asynchronous data processing to allow keeping track of how OPC Manager is processing.
DBG_LEVEL_DATACHANGE (8) - Outputs messages during the processing of data received from the OPC Server and can be used to determine if the OPC Server is still replying to the OPC Manager process.
DBG_LEVEL_THREADFLOW (16) - Outputs messages during processing to allow tracing of how OPC Manager was operating when trying to identify causes for issues or crashes in OPC Manager.
DBG_LEVEL_REFRESH (32) - Used to identify when OPC Manager has requested a refresh of its data from the OPC Server because it has not received data from the OPC Server for a while (150% of the scheduled time for the OPC Group in question).
DBG_LEVEL_VALIDCLSID (64) - Outputs the CLSID structures retrieved from the Operating System when attempting to connect to an OPC Server. This flag should only be turned on when there are problems connecting to an OPC Server.
DBG_LEVEL_HEARTBEAT (128) - Outputs diagnostic messages when OPC Manager is engaged in its heartbeat processing, which ensures the OPC Server has been actively feeding data and is still reachable.
DBG_LEVEL_TABLEDUMP (256) - Outputs the OPC Tag and reply argtables for demand scans. Useful when trying to debug communication or data problems.
DBG_LEVEL_SAFEARRAYDUMP (512) - Outputs diagnostics about what data was received from the OPC Server and how it was reprocessed for LOWIS's use.
DBG_LEVEL_MSGDEBUG (1024) - Outputs diagnostics about the LOWIS message structures used for demand scan requests and the reply construction.
DBG_LEVEL_ASYNCREADDEBUG (2048) - Outputs diagnostics related to processing scheduled scanning requests.
DBG_LEVEL_SYNCTABLEDUMP (4096) - Outputs diagnostic when synchronous data reads are occurring.
DBG_LEVEL_TGETSVRSTATUS (8192) - Heavily detailed diagnostics used to help identify deadlocking issues in OPC Manager.
DBG_LEVEL_ABANDON (16384) - Diagnostics for when the internal abandon events get signaled.
DBG_LEVEL_GROUPLEAK (32768) - Diagnostics around when OPC Group objects are created/destroyed to help in tracking instances when they appear to not be managed properly.
DBG_LEVEL_TIMEOUTCHECK (65536) - Diagnostics when the internal variable used for communication timeout checking is reset to help in diagnosing issues where communications with the OPC Server appear to stop.
DBG_LEVEL_HEAPCHECKS (131072) - Enables calls into the C Run-time libraries to check the status of the memory heaps used within OPC Manager. This should only be enabled when memory corruption appears to be occurring as this will MASSIVELY slow down processing.
DBG_LEVEL_RELEASEITEMOUT (262144) - Diagnostics around the calls to release tag items from OPC Group objects.
DBG_LEVEL_ASYNCREADDBGERRS (524288) - Enables extra diagnostics when processing error replies from the OPC Server.
DBG_LEVEL_TAGIDDBG (1048576) - Extra tracking diagnostics when OPC tags are being assigned ids. Used to make sure that item ids are not being reused when odd data seems to be getting returned from the OPC Server.
DBG_LEVEL_WRITEDETAIL (2097152) - Outputs the values being written to OPC devices to check values that were sent.
DBG_LEVEL_ABANDON2 (4194304) - Diagnostics around checks where the abandon flag is also checked.
DBG_LEVEL_THREADINFO (8388608) - Diagnostics to track when new threads are being created.
DBG_LEVEL_VALUEFILTER (16777216) - Used to enable the outputting of a diagnostic file that tracks all data changes for specific OPC Tags, used to help in identifying when strange data is being received from the OPC Server.
DBG_LEVEL_SCHEDULEDDEBUG (33554432) - Extended diagnostics for tracking progress on scheduled scanning requests.
DBG_LEVEL_DOREQUEST (67108864) - Extended diagnostics from the main processing function within OPC Manager to identify requests it processes and where it directs them for processing.
DBG_LEVEL_TRACKDATACHANGE (134217728) - Extra diagnostics around start and end of processing of data packets received from the OPC Server.
DBG_LEVEL_TRANSACTIONIDS (268435456) - Tracks the allocation/release of the transaction id values passed to the OPC Server.
LOWIS Server processes operate in a shared thread processing model, where there is only one "active" thread at a time while the other threads might be in a suspended state waiting for a reply. In each layer of processes (Processors, Scan Tasks, or Communications Tasks), pending tasks are divided based on the Communications Port they will ultimately be processed on. So, for example, if there are beam wells on COM1, COM5 and COM7 and injection wells on COM5 and COM6, then there will be at most 3 worker threads in use on the beam processor and 2 worker threads in use on the injection processor. This division comes from the days of serial-based device access, where only one device at a time could be communicated with on a communication channel, so there was no point in trying to process more than one item of work at a time for a given communication channel. When a worker thread has prepared its request for the next level down in the hierarchy and has sent it out, it goes into a wait state until the reply is received. While it is in this wait state, another worker thread is allowed to process its work, allowing for a controlled pseudo multi-threaded mode of processing.
As these worker threads are allocated by Communications Channel, there is no benefit to having more threads allocated than are needed for the communications threads. LOWIS processes allocate a few threads for non communications based processing (scripts and reports as examples), so the general rule is to have
NumThreads = 3 + number of communications ports to be talked to
For the examples mentioned earlier, this would mean a NumThreads value of 6 for the beam processor and 5 for the injection processor. Note that this is different than what would be expected in a true multithreaded application, where the general rule is to have a maximum thread count equal to the number of actually available processor cores in the computer, as any more would imply switching out active threads to run other threads.
IsParallel
In the early days of LOWIS, OPC communications were handled in the mscan program along with the other communication methods. In this case it was typical to set up multiple communication ports talking to the same OPC server to provide a mechanism to spread out a scheduled scan and allow it to complete faster. When the OPC Manager system was created, these communication ports were effectively collapsed into a single communication port, meaning that these scheduled scans were now taking longer, even though both OPC Manager and the OPC Servers are fully multi-thread capable.
As a result, a new option was created (IsParallel in the Server section of the Configuration system) that turns off the system in LOWIS processes that distributes items to be worked on based on the final destination communication port, and instead assigns the items of work to a thread as soon as a thread is available. The benefit of this is that for scheduled OPC scanning (which should have a near zero millisecond turnaround time) LOWIS can process scheduled scans very quickly. The downside is that if any non-OPC communication methods are in use, the entire processor will be waiting for threads to become available, since they are now all occupied, whereas with the communication port allocation they would all be on one thread's queue. This will turn that process into a complete bottleneck for all tasks.
In general, we only recommend turning on the IsParallel option for purely OPC systems, as significant processing impacts can occur in other communication environments. Please note that IsParallel should only be turned on at the Server level entry of the configuration system; it should not be enabled or disabled on a per-processor basis.
View ArticleAs LOWIS is a complex software system, there is the potential for the occasional error to occur. When this happens on the server, a LOWIS process may fail. The LOWIS host will usually try to restart the process and re-run the last command (five times by default) before letting the process stop completely due to an error.
If a process has failed that is set to run automatically, the status indicator in LOWIS Admin will show in red and the Processes count will show a lower value than the total expected number of running processes.
The first thing you should try to do is restart the process. After a failure, any previously queued commands are lost. If the process aborted due to a particular command, then you may be able to restart the process by simply right clicking on it and selecting Restart.
Even if the process restarts successfully, you should report the issue for further investigation. If you look at the Messages section of LOWIS Admin there should be a log of any process errors that occurred. Please include a screen shot of any errors that are noted during the time frame when the process failed.
Finally, if you have turned on the Process Dump On Abort option in the LOWIS Admin configuration a memory dump should be generated whenever a process fails. They may often be too large to directly attach to a ticket submission, but if the support staff feel that it is needed please work with them to transfer over this file to help diagnose the issue that occurred. If you do have this option enabled, you may also occasionally wish to check the directory where the memory dumps are set to be generated and delete older memory dumps that are no longer needed.
If the entire LOWIS service has stopped, you should try restarting it from Windows Services. The main LOWIS service is the LOWIS Host Service (pictured below). If it fails to restart, and it is not running under the Local System, first make sure that the password for the user selected as the Log On As user has not changed since the service was configured.
If the service still will not restart, you can attempt to run it from a command prompt window to potentially get more information about why it is failing. Two of the more common reasons for the service failing to start are that the SQL database used by LOWIS is not running (as pictured below) or that third-party security software is preventing the application from launching. If you receive a pop-up security message asking for permission to launch the application, look for a box to check that says not to ask again when running this software. This should allow LOWIS to be run as a service. You may also wish to check the activity logs from any security software to verify there are no instances of it blocking the service from starting.
If the service starts but no processes launch and users are unable to connect, it may be due to the license having expired. Looking at the server status in LOWIS Admin should show this as an alarm on the main Servers screen. If this is the case, please contact support.
View ArticleTo modify well groupings, you, or someone with administrative access to your LOWIS server, would need to go onto the server and run the LOWIS Admin application.
In LOWIS Admin, if you expand the Configuration section you should see a Navigator tab. If you select this tab it will display the Well Groupings. You can also select the other tabs to see the Well Conditions and Columns for well information shown in the LOWIS client navigator.
If you want to add a new grouping, you can right click on any white space in the Well Groupings and select New Group.
If you want to add a new item to an existing group, you can right click on the group and select New Item.
Or, you can select an existing item and select Clone to make a new item copying the fields from that existing item.
You can also select Properties if you want to modify an existing group. Any of these selections will open a new window with three fields: the Title to be displayed in the LOWIS Client, the Primary Table, which should almost always be left as MASTERWL, and the Filter you want to apply to the well list when this grouping is selected.
As noted above, you should generally set the Primary Table as MASTERWL which is the primary well list in the database. For creating a Filter, if you want to restrict by a specific field, you can look at the Info for that field in the LOWIS Client by right clicking on it on a grid. As an example, Group 1 Name is “FNAME” in the database:
For example, if you want to create a group where you only list wells with a specific group 1 name “Testing” your filter should have the following:
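A minimal sketch of such a filter, assuming a standard SQL-style condition on the FNAME field noted above:
FNAME = 'Testing'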
You can use AND or OR to write more complex statements, such as if you want to list all wells with a Group 1 name of Testing or Drilling.
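Again as a hedged sketch, a filter listing wells from either group might look like:
FNAME = 'Testing' OR FNAME = 'Drilling'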
When you are done making changes, make sure to click the Save button at the bottom of the Well Grouping window:
All changes are temporary until you hit Save. If you wish to undo your current changes you can click Reload and it will restore what was previously saved.
View ArticleThe LOWIS Software Suite is organized as an n-Tier application consisting of 4 Tiers:
The User Interface (UI) Layer (LOWIS Client Application and other command line tools such as msql.exe and mcsscrip.exe)
The Processor Layer that the UI layer interacts with
The Scan Task Layer that the Processor Layer interacts with
The Communication Layer accessed by the Scan Task Layer
At each of the layers below the User Interface layer, all interaction is done through a common interface that is only used by that specific layer boundary, enabling each layer above to interact with any layer immediately below it with minimal knowledge of which application in that layer it is communicating with.
The Processor Layer
The Processor Layer is where artificial lift specific functionality is placed. For each artificial lift type there is typically at least one processor application (the Beam Processor, as an example) and two database server applications (Beam DB Server and Web Beam DB Server) associated with it. There may also be other specialized processors at this level (the Analysis processors are an example of these).
It is the responsibility of the applications in this layer either to respond directly to the request received or to determine what information is needed from a device to answer the request. If information from a device is needed, it is requested from the appropriate process in the Scan Task Layer.
The Scan Task Layer
The Scan Task Layer is responsible for accepting a list of required information from a device and constructing the necessary requests that will have to be made to the device to get the required information. Each scan task application is written to process a specific protocol (mmdbscan.exe is the Modbus Scan Task, for example). As such there is typically a specific instance of each application to handle the requests for a specific controller type (i.e., there will be a SAMRPC instance for the earlier Lufkin SAM rod pump controller and a different SAMRP2 instance for the newer versions of that controller, both of which use the same mmdbscan.exe application).
Any behavior differences needed by different specific controller versions are handled by configuration options specified for that instance in the LOWIS Admin configuration system.
When the scan task generates each request message that will need to be sent to the specific controller, it also calculates the number of bytes that the resulting reply message from the controller should be. This count is sent along with the request message to the appropriate Communications Layer application to actually perform the communications interaction.
The Communications Layer
The Communications Layer applications are responsible for the actual interaction with the device to get the data desired by the Application Layer. As such, each Communications Layer application performs a particular mode of communication interaction. As was the case in the Scan Task layer, specific behavioral requirements for a particular communication channel are specified as configuration options in the LOWIS Admin configuration for that communication channel instance.
In LOWIS, regardless of what type of communication mechanism is used, a COM port is assigned to that communications channel. If the communication is done via RS-232 style communications, this will typically be a real COM port on the machine; otherwise this is a virtual COM port used to group the appropriate configuration information needed for communication with the devices on the communications channel.
LOWIS has the following Communications Layer applications to support the specified types of communications channels:
Serial Port (mscan.exe) - This application would be used for talking to devices directly accessible via RS-232 from the machine, or devices reachable via a radio network whose transmitter is directly connected via RS-232 to the machine. For more information see Radio Communications (mscan.exe).
TCP/IP Port (mipscan.exe) - This application would be used for talking to devices that are directly reachable via a TCP/IP address/port combination or for talking to devices reachable via a radio that is accessed via a TCP/IP access device (an ARCOM, IP to Serial, or terminal server type of device). For more information see TCP/IP Communications (mipscan).
OPC Port (mopccldr.exe) - This application is used for talking to devices which have their communications controlled via an OPC Server. OPC originally was an abbreviation for OLE (Microsoft Object Linking & Embedding) for Process Control, but now has become a non-abbreviation identifier since OPC supports underlying technologies aside from just OLE. OPC consists of specifications that allow any software that follows the specification to be able to get data from any device that has an OPC server that can talk to it. For more information see OPC Communications (mopccldr.exe).
View ArticleFor background information see LOWIS Server Architecture Overview
In LOWIS, the mscan.exe program is used to communicate with devices that are connected via radios, where the radio transmitter is directly connected (or made to appear as if it is) to the LOWIS server. For devices where the radio is connected to a TCP/IP device, please see the TCP/IP Communications (mipscan.exe) article which discusses how to configure these.
Radio transmitters are typically connected to a server via an RS-232 connection, and require more tuning than other communication mechanisms before they are functional. Part of this is due to the transmitters normally being powered down and requiring a small period of time to become fully functional again once the port has been activated (which signals the transmitter to start powering back up). The RS-232 port was originally meant to be used for direct connections between devices, so the port has no concept of needing to wait for the device on the other end to be ready to receive data. This can be illustrated with the following image, where the line indicates the transmitter's ability to transmit along with a visualization of the data being transmitted.
In this image the blue background indicates that there is a period after the port is initially activated until it is ready to accurately transmit data. The yellow tinted section indicates when data is being transmitted, and the green tinted section indicates that the transmitter is powering back down to a low power state.
Key On
This powering up period (blue) is configured as the KeyOn period in LOWIS and is meant to specify the amount of time (in milliseconds) until the signal coming off of the transmitter is strong enough to produce a valid signal. The Key On period should also account for how long it may take the receiving device to notice the transmitter's signal and switch itself into a receiving mode. This is why the Key On value is specified in the device configuration in LOWIS instead of just on the COM port.
Key Off
For some transmitters, there might be a small delay between when data is handed to them for transmission and when it is actually sent out. In some cases, once the RS-232 port indicates that it has finished its request the transmitter might immediately start powering down, not waiting for the yet to be sent data to be transmitted first. To account for this, there is a configuration setting in LOWIS called KeyOff that delays signaling to the RS-232 port that the transmission is complete, allowing these extra characters to be transmitted cleanly. Note that this value is also configured in LOWIS on each device instead of just on the COM port.
Read/Write Constant/Multiplier
There are four configuration settings in LOWIS that are used for determining how long the mscan.exe process will wait for the reply to be sent from the device, two for read requests and two for write requests. For both of these sets there is a base value and a per character value. For reads, the Read Constant value is the base amount of time to wait and the Read Multiplier is the expected amount of time each character will take to be transmitted. For writes, these are Write Constant and Write Multiplier. The Read values are used both for the transmission as well as for the receive time waited.
The idea behind each of these is that the constant value should be the amount of time it will take the device to process the request and prepare the answering reply, while the multiplier portion accounts for the amount of time it will take for the reply to actually be transmitted. As mentioned in the LOWIS Architecture article, when a request message is prepared in LOWIS it also calculates the number of characters that are expected to be in the reply. This count is what the multiplier is applied to in order to determine the transmission time for the reply. As a result, the mscan.exe process will wait for the Constant plus the multiplied time before it stops listening, if the reply has not already been fully received before then.
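As a hypothetical worked example (the numbers here are purely illustrative, not recommended values): with a Read Constant of 500 ms, a Read Multiplier of 10 ms per character, and an expected reply of 40 characters, mscan.exe would wait up to 500 + (10 x 40) = 900 ms for the reply before giving up.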
Baud Rate/Data Bits/Parity/Stop Bits
RS-232 ports are performing a digital to analog conversion when sending data and an analog to digital conversion when receiving the data. As such, they need to know how to interpret the voltage signals they are receiving to convert them into their digital equivalents. This is the purpose of the baud rate, parity and stop bit settings. In general, all that really needs to be known is that both sides of the connection need to be using the same settings and that there is no way for the RS-232 ports to determine the settings from the communication stream itself.
In more detail, baud rate specifies the rate in bits per second that the devices should be sending/receiving at. A baud rate of 9600 means 9600 bits per second, or roughly 1200 bytes per second (the "roughly" will make more sense by the end of this section). The higher the baud rate, the higher the throughput; however, each bit also occupies a smaller time window, so a brief signal fluctuation is more likely to affect how the data is interpreted. Baud rates are typically specified in groupings of 300, with common settings being 110, 300, 600, 1200, 2400, 4800, 9600, 14400, 19200, 38400, 57600, and 115200.
Data bits indicates how many bits of data are to be sent in each packaging of bits that are sent. 8 data bits is typical, but on some systems 7 bits may be used.
Stop bits indicates the number of bits that are inserted after each byte of data has been transmitted to provide for better error recovery. Typically only 1 stop bit is specified, although some systems allow 1.5 (an extra long bit) or 2 to be specified.
Parity specifies whether an extra error detection bit is to be included with each byte of transmitted data and if so which calculation will be used to check this bit. Settings for parity can be among the following:
None (typical) - no parity bit is added
Even - a bit is added whose value will make the total of on bits in the sequence into an even value
Odd - like Even, but will make the count an odd value
Mark - a bit is added that is always on
Space - a bit is added that is always off
For an N-8-1 setup (No Parity, 8 data bits, 1 stop bit), this will mean that for each transmission packet there will be
1 start bit (always included)
8 data bits
1 stop bit
or 10 bits of transmission for every 8 bits of actual data, meaning that it is 80% efficient. If a parity bit is added the total becomes 11 bits, or roughly 73% efficient. So for an N-8-1 connection at 9600 baud the throughput will be 960 bytes per second (9600 bits per second / 10 bits per byte).
Knowing the baud rate/data bits/parity/stop bits of a COM port allows someone to calculate the read/write multipliers mentioned earlier in this article and is why the multipliers and constants are configured in LOWIS for the COM port as well as the baud rate/data bits/parity and stop bit values.
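As a hedged illustration of that calculation: at 9600 baud with an N-8-1 setup, each character takes 1/960 of a second, or roughly 1.04 ms, to transmit, so a Read/Write Multiplier on the order of 1 to 2 ms per character would be a reasonable starting point for such a port; the actual values should still be tuned for your radio network.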
RTU Error Bytes and Protocol Aware
The RTU Error Bytes and Protocol Aware settings are included at the Scan Task level even though they are related to radio communications.
As mentioned before, when a request is built within LOWIS, we also calculate how many bytes should be in the reply, and we wait for that many bytes to be returned (or until the calculated time for receiving the message has passed) before we stop listening for more bytes. As radios are interpreting radio waves, they might either hear transmissions from other radios or think they are hearing a transmission when it is really just radio noise. In either case, what is received by LOWIS might include characters that were never intended to be part of the reply. If these extra characters happen to occur at the start of the received reply, the final characters of the actual reply will be missed because the expected byte count has already been received. The RTUErrorBytes setting works around this by specifying a number of extra bytes to be listened for, beyond the calculated reply size, before considering the reply to be complete. While this allows for correct reception in those cases where error bytes actually appear in the reply, the downside is that when no error bytes are received there will not be enough extra bytes to complete the expected count, meaning that receives will only end after the calculated reply time has been exceeded, making the overall performance of the COM port slower.
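As a hypothetical example: if a reply is calculated to be 20 bytes and RTUErrorBytes is set to 3, the process will listen for up to 23 bytes. If 2 noise characters precede the reply, all 20 real bytes still fall within the 23-byte window; if no noise characters arrive, the receive only ends when the calculated reply time expires.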
To work around the issue of having to time out for non-existent error bytes, the communication layers in LOWIS (where appropriate) have been written to understand certain protocols well enough to detect where the actual reply is in the received data stream. This functionality is enabled by turning on the ProtocolAware setting in the associated scan task in the LOWIS configuration. This value works in conjunction with the RTUErrorBytes value in that it will look within the first few bytes (or those few bytes plus the RTUErrorBytes count of bytes) for the unique signature that begins the reply message. If it finds it in those bytes, it will discard the extra bytes found before that point and then read only for the original expected byte count of the reply message. This means that it will stop listening once the complete message has been received instead of waiting for extra bytes that might not arrive. If the expected start of the message is not found within those few bytes, it will stop listening at that point and immediately return instead of waiting for the remainder of the characters to be received before timing out. In both of these cases, overall COM port throughput is enhanced.
View ArticleFor background information see LOWIS Server Architecture Overview.
In LOWIS, the mipscan.exe program is used to communicate with devices that are reachable via a TCP/IP address/port identification. There are two different categories of devices that can be reached via TCP/IP: devices that truly support IP access and devices that are reached via an IP to Serial device/gateway.
The advantage of TCP/IP communications is that the TCP/IP protocol layer guarantees the integrity of the data that is transmitted, so error detection and recovery information no longer needs to be included in the provided data stream. Note that this guarantee is only effective for data while it is within the confines of the TCP/IP system; it does not mean that data received via a radio network and then sent over TCP/IP is guaranteed to be correct. It just means that what was received at the TCP/IP end point is the same data as was initially provided at the TCP/IP origin point.
True TCP/IP Devices
Communications to true TCP/IP devices tend to be much faster and more reliable than radio devices, due to the entire communications chain being done in a digital fashion instead of requiring digital to analog to digital conversions.
IP To Serial Devices
IP To Serial Devices are slightly more complicated to set up than true TCP/IP devices as these are typically used to access a radio transmitter that is not able to be directly connected to the LOWIS server. This means that all of the various transmission errors that were mentioned in Radio Communications (mscan.exe) can still occur but are no longer under the direct control of LOWIS. All of the settings that control how the radio communications will occur have to be configured within the IP to Serial device itself as LOWIS can no longer influence those settings.
TCP/IP Com Port Configuration
To add any kind of TCP/IP accessible device into LOWIS, you need to navigate to the LOWIS Admin Configuration and locate Common, then Server, and then right click on Comm Ports and select Add Section. For most devices you will want to select Serial Over IP Port (unless all of the devices on this Com Port are going to be Modbus TCP devices - see Modbus TCP below for requirements for this - in which case Modbus TCP Port should be selected). Once the new Com Port has been added, you can look inside of its configuration and see that Port Type has been set to ARCOMOVERIP (or MODBUSOVERIP if Modbus TCP was chosen).
The other main configuration option to be considered is IP Timeout, which specifies the amount of time (in milliseconds) to wait before considering the reply to have been completely received, or to decide that no reply is coming from the device if nothing has been received so far. Note that IP communications will still use the calculated reply size and RTUERRORBYTES setting, as there is nothing in TCP/IP that requires the entire reply to be returned as one message, and IP to Serial devices will typically return received data in one or two character increments as they are received (given the typical speed difference between an Ethernet network and a serial port).
Modbus TCP
Modbus TCP is a special form of TCP/IP communications to Modbus devices that has specific requirements and provides some specific benefits. The requirements are:
Communications to the Modbus device must be true TCP/IP communications, no IP to Serial can be used
The Modbus device must explicitly support the Modbus TCP mode
Communication to the device is done over port 502.
If all of these requirements are met, then the benefits from doing this are
It is not required to pass in the RTU address in the request as the communication is already going to a specific device. It is good practice to include the correct RTU Address in case the device still requires it and for help in communications debugging. Note that the space used by the RTU address has to be included in the request for correct processing of the message to occur.
The reply from the device will not have the CRC signature added to the end of the reply. As the CRC is included to detect any problems in the data feed from the device, and such problems are already guaranteed to be detected by the TCP/IP layer, the CRC is no longer necessary.
View ArticleThis article identifies the configuration settings values available for instances of the 8500 Protocol (mbkrscan) scan tasks when they need to use OPC to acquire the data. See LOWIS Configuration for OPC Accessed Devices for context of these settings.
EPCAC
EPCAC is the designation for the tag generation for the older Matrikon ScadaCAC OPC Server style tag generation. The generated tag format looks like:
altaddress.Param:NN.Value;T
where
altaddress is the base OPC tag associated with the device in LOWIS
NN is the register number being requested
T is the type code specifying the datatype (W for an unsigned short as an example)
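A hypothetical example (the device name here is illustrative): if altaddress were WellRTU01 and register 42 were requested as an unsigned short, the generated tag would be WellRTU01.Param:42.Value;W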
SOFTTLBOX
SOFTTLBOX is used to identify the Software Toolbox OPC Server style of tag name generation. The generated tag format looks like:
altaddress.Param:NN.Value;T[;OV]
where
altaddress is the base OPC tag associated with the device in LOWIS
NN is the register number being requested
T is the type code specifying the datatype (W for an unsigned short as an example)
[;OV] is an optional argument that tells the OPC Server to ignore what it believes the datatype of the register should be and to use the datatype that was passed in with the tag instead. This override might be necessary because the Software Toolbox OPC Server has what it believes is the correct register map contained within it and it will reject requests for registers when the passed OPC tag does not match on the datatype specified.
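A hypothetical example with the override (device name illustrative): WellRTU01.Param:42.Value;W;OV, which tells the Software Toolbox server to return register 42 as an unsigned short even if its internal register map specifies a different datatype.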
View ArticleThis article identifies the configuration settings values available for instances of the AESCAN scan tasks when they need to use OPC to acquire the data. See LOWIS Configuration for OPC Accessed Devices for context of these settings.
AUTELE
At the moment the only supported OPC Tag generation for AESCAN devices is the AUTELE format. The generated tag format looks like:
altaddress.NNNNN
where
altaddress is the base OPC tag associated with the device in LOWIS
NNNNN is the register to be retrieved
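A hypothetical example (device name illustrative): with an altaddress of WellRTU01 and register 12345 requested, the generated tag would be WellRTU01.12345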
View ArticleThis article identifies the configuration settings values available for instances of the Modbus scan tasks when they need to use OPC to acquire the data. See LOWIS Configuration for OPC Accessed Devices for context of these settings.
LUFAUT and AES
LUFAUT is the currently used name for this tag generation style, AES is an older name for it that has been kept for backwards compatibility. The generated tag format looks like:
altaddress.MNNNN
or if Generate Modifiers is active
altaddress.MNNNNTT
where
altaddress is the base OPC tag associated with the device in LOWIS
M is the number corresponding to the requested modbus function code
NNNN is the register number being requested
TT is the type code specifying the datatype (U for an unsigned short as an example)
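A hypothetical example (device name illustrative): with an altaddress of WellRTU01, Modbus function code 3, and register 100, the generated tag would be WellRTU01.30100, or WellRTU01.30100U if Generate Modifiers is active and an unsigned short is requested.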
SOFTTLBOX
SOFTTLBOX is used to identify the Software Toolbox OPC Server style of tag name generation. The generated tag format looks like:
altaddress.MNNNN
or if Generate Modifiers is active
altaddress.MNNNN@TTTTT
where
altaddress is the base OPC tag associated with the device in LOWIS
M is the number corresponding to the requested modbus function code
NNNN is the register number being requested
TTTTT is the type word specifying the datatype (@Word for an unsigned short)
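A hypothetical example (device name illustrative): for Modbus function code 3 and register 100 with Generate Modifiers active, the generated tag would be WellRTU01.30100@Word for an unsigned short.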
View ArticleFor background information see LOWIS Server Architecture Overview
In LOWIS, the mopccldr.exe program (referred to as OPC Manager) is responsible for communications to devices that use an OPC Server to provide the data. OPC Servers are software designed to act as an intermediary between devices and any other software that is interested in getting information from those devices. Using an OPC Server means that multiple applications can get data from the device (as the OPC Server software is what is communicating with the devices), and since OPC is a specification, applications written to the specification can get data from any kind of device that has an OPC Server to provide data for it.
LOWIS's OPC support was written against the Data Access 2.05 specification, with some pieces of the Data Access 3.0 specification being used to speed things up when they are available on the OPC Server. LOWIS's OPC support mainly operates in the Asynchronous data mode, where LOWIS tells the OPC Server what data it is interested in and how often it needs it to be updated and LOWIS just waits for the OPC Server to provide it the data when the OPC Server gets it. LOWIS can be forced to operate in Synchronous data mode, where LOWIS will actively ask the OPC Server for the data and wait for the data to be provided by the OPC Server when the LOWIS schedule says that the data should be updated.
OPC is unique in the LOWIS architecture in that it exists in two different layers of LOWIS - the communications and scan task layers. This is because while OPC is a communications specification (and thus exists in the communication layer of LOWIS), it is also a specification for which generic device support can be written, and the scan task layer is where device support exists. As such, LOWIS supports OPC communications in two different modes - Native to OPC and Pure OPC communications.
As was mentioned in the LOWIS Server Architecture, each layer of LOWIS uses a generalized interface to communicate with the layer that exists below it in the architecture, meaning that in theory every type of device can be communicated with via any communications channel that LOWIS has available to talk to devices. It is only when we are communicating via OPC that this theory falls apart, mainly because of how data is identified in OPC versus other communication channels. For radio or ip communications, the outgoing request message and returning reply message are identical (except for the slight differences involved in Modbus TCP as mentioned in the TCP/IP Communications article). For OPC the request is completely different.
OPC Servers actually present the data as individual elements instead of returning the data as a stream of bytes that has to be interpreted by the consuming application (this is because the OPC Server has already done the consuming and parsed out the data from the replies it receives from the devices). As a result, when talking to an OPC Server each item of data to be retrieved is identified by something called an OPC Tag, which is just a string identifying the piece of data that is desired. Typically in an OPC Server these tags are given readable and meaningful names, so you can ask the OPC Server for the CasingPressure value instead of having to know that it happens to be stored in register 524 on the device. The OPC specifications just say that the OPC Server will expose available data via OPC Tags and that OPC client applications will have to know which tags they need to request to get the data desired by the user; nothing in the specification dictates how these OPC Tags are structured. Indeed, it might be necessary to configure in the OPC Server each tag that it will need to provide data for.
This means that each OPC Server vendor has the ability to design their own OPC Tag naming conventions meaning that OPC Servers from different vendors that support the same device might have completely different OPC Tags for the same data from a device. For this reason LOWIS has certain identifiers that will need to be specified to tell LOWIS how the OPC Server that it is to communicate with does its tag structuring.
In addition to these explicitly created OPC Tags, many OPC Servers also support a pseudo OPC Tag syntax, where the OPC Server can parse the tag that was passed to it and determine what information needs to be pulled from the device and how to pull it, without that tag having been explicitly specified in the OPC Server. This functionality allows for all (or nearly all) registers available in a device to be accessible with minimal configuration in the OPC Server. As an example, let's consider that we configure a device in the OPC Server named Radio5.WellHead9 that the OPC Server knows is accessible on the radio connected to COM Port 2 and is a modbus device with an address of 13. If the OPC Server was then to receive an unexpected OPC Tag request for Radio5.WellHead9.Value.40100S, it could look at this tag and see
Radio5.WellHead9 - it recognizes this as a configured modbus device on COM Port 2 with an address of 13
.Value - it sees this and knows that a structured modbus register request will follow
.40100S - it knows this is a structured modbus request and it parses it as function code 4 (the 40000), register 100 (the 100) and the data to be returned will be a signed 2 byte integer (the S)
All of this allows the OPC Server to generate the correct request to go out to the device and process the reply received from the device. If LOWIS has built in support for Native to OPC for a specific controller type, LOWIS uses this automatic tag generation methodology to translate the request that it normally would have sent out to a device into the appropriate OPC Tags for the OPC Server it is going to use to talk to the device instead. This translation to OPC is done in the appropriate controller's scan task layer executable, which also handles repackaging the returned OPC data into a format that the normal processing code can process so that it is unaware of OPC being involved.
For pure OPC communications, the device is configured as an OPC device within LOWIS, either an OPC### (the ### syntax indicates a number that typically starts from 001 - so OPC001 as an example) or xOPC## (where the x is the LOWIS letter code for the artificial lift type employed, SOPC01 as an example of a Submersible OPC device). For these devices every piece of data to be retrieved must be explicitly specified. For the artificial lift specific OPC versions, there are also some analog and discrete point types that must be configured for the built in functionality to operate correctly.
View ArticleWith the introduction of the configurable client file download path on the LOWIS client updater, version 450 and later, some customers expressed an interest in using a network location as their Client File path. Due to Microsoft .Net security restrictions, the operating system requires that any network location be fully trusted in order to launch an executable file. If a user attempts to use a network path that is not fully trusted, the LOWIS client fails to launch properly.
To fully trust a network location you will need to use the Code Access Security Policy Tool (caspol.exe) provided by Microsoft with .Net. For the LOWIS client you will want to set the 32-bit .Net 4.0 permissions to Fully Trusted for your network location. You can typically find the tool by going to the command prompt and entering "cd %SystemRoot%\Microsoft.Net\Framework\v4.0.30319\". The arguments you want to use to enable Full trust are: "caspol.exe -pp off -m -ag 1.2 -url file://<serverlocation>/* FullTrust".
An example of running the command to trust a \\STORAGE\LOWIS folder:
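As a sketch based on the template above (the exact URL form may vary by environment):
caspol.exe -pp off -m -ag 1.2 -url file://STORAGE/LOWIS/* FullTrust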
For a detailed list of the caspol.exe commands please consult Microsoft's documentation on MSDN.
View ArticleIn earlier releases of LOWIS, whenever there was an update to the client files users would need to run a new installation executable to update their client software. Starting with LOWIS version 7.0, the executable you install locally, or on your Citrix server, is actually a LOWIS client updater. Instead of being the client itself, it downloads from the server the client files that will be run locally.
This removes the need for end users to regularly update their client by manually installing newer versions, or for administrators to update the client installed on the Citrix server. LOWIS will now automatically download any updated client files when you connect to the server.
In version 450, or later, of the LOWIS client updater you can configure the location where it will download the client files in the Settings. The Client Files path will be the main directory under which it downloads the client files from the LOWIS server, or servers, that you connect to.
View ArticleIn version 7.0, new functionality was introduced to download updated client files from the server in order to allow users to obtain client updates without the need to reinstall the client. However, the location that these files download to was fixed. This change allows users to select this download location in the same fashion that they have been able to do with the data store.
A new field was added to the LOWIS client Settings that allows a user to enter a directory that is used to store the client files downloaded from a server (see figure below). Buttons to set the default path and to browse the local drive were added as well. If no value is entered the client will continue to use the default path introduced with the 7.0 client. Finally, a new command line option "filepath" was added to allow users to pass in a client files location when launching the client.
View ArticleForeSite is a modular web based application that integrates with CygNet to obtain real time and historical data for your asset. By combining this information with physics based analysis and big data analytics, the platform provides field-wide intelligence to maximize production.
View Article1 ForeSite Predictive Analytics
Overall Architecture
Server Cluster Configuration
Allows all 3 software components to scale across all nodes
Allows scaling across future nodes that may be added
Total 6 nodes:
2 management nodes
3 data nodes
1 edge node
SROM will be installed on the Edge and Data Nodes
The Edge node will be the Primary SROM with the webserver
SROM nodes will need an NFS mount
The same storage will be mounted to all nodes, and the mount path should be the same on every node.
Hardware Requirements
Management Node 1
Filesystem    Size     Mounted on
/dev/sda2     160GB    /
/dev/sda1     1GB      /boot
/dev/sda3     20GB     /var
/dev/sda4     50GB     /home
/dev/sdb1     200GB    /data
Swap: Size 2X system memory
CPU/cores: 8
Memory: 32 GB
Management Node 2
Filesystem    Size     Mounted on
/dev/sda2     160GB    /
/dev/sda1     1GB      /boot
/dev/sda3     20GB     /var
/dev/sda4     50GB     /home
/dev/sdb1     200GB    /data
Swap: Size 2X system memory
CPU/cores: 8
Memory: 32 GB
Data Node 1
Filesystem    Size     Mounted on
/dev/sda2     160GB    /
/dev/sda1     1GB      /boot
/dev/sda3     20GB     /var
/dev/sda4     50GB     /home
/dev/sdb1     1TB      /data
Swap: Size 2X system memory
CPU/cores: 8
Memory: 32 GB
Data Node 2
Filesystem    Size     Mounted on
/dev/sda2     160GB    /
/dev/sda1     1GB      /boot
/dev/sda3     20GB     /var
/dev/sda4     50GB     /home
/dev/sdb1     1TB      /data
Swap: Size 2X system memory
CPU/cores: 8
Memory: 32 GB
Data Node 3
Filesystem    Size     Mounted on
/dev/sda2     160GB    /
/dev/sda1     1GB      /boot
/dev/sda3     20GB     /var
/dev/sda4     50GB     /home
/dev/sdc1     1TB      /data
Swap: Size 2X system memory
CPU/cores: 8
Memory: 32 GB
Edge Node - SROM (Webserver) and DataStage
Filesystem    Size     Mounted on
/dev/sda2     160GB    /
/dev/sda1     1GB      /boot
/dev/sda3     20GB     /var
/dev/sda4     50GB     /home
/dev/sdc1     1TB*     /data
Swap: Size 2X system memory
CPU/cores: 8
Memory: 32 GB
* 1TB is not required; 200 GB will be sufficient.
Additional Notes:
The allocations presented above are for a sandbox environment and not typical of a Hadoop cluster deployed on the IBM Cloud or on the IBM BigInsights reference architecture.
Adjustments may need to be made as the workload is further understood.
For reference see ForeSite R1 IBMServerClusterRequirements
Software Components Configuration
SROM RunTime Web/Analytics Server
Software Requirements:
Red Hat Enterprise Linux OS
Python 2.7
Python pip
Spark 2.0.1
Sqlite3
Libsqlite3-dev
Nginx Server
Docker
Python Packages Dependencies (Use the latest versions for all)
Virtualenv
Django
Pandas
Requests
Py4j
Pydoop
Note: Use the latest version for all
Python Packages Dependencies on Spark Cluster
Pandas
Sklearn
Scipy
Requests
Note: Install on each node
Configuration Requirements
PYTHONPATH environment variable pointing to pyspark
Environment information required
IP address and host name of the SROM server
Spark master
Hive thrift server URL and port number
Spark installation path
Orchestration layer’s web service hostname
Orchestration layer’s web service port number
Special Instructions for Python Package
Command to install these packages: sudo pip install pandas sklearn scipy requests
If pip is not available on the machine: sudo yum install python-pip
If the above command to install pip does not work:
curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
sudo python get-pip.py
Additional Notes
Spark is able to access the Hive DB
Data Integration Server
Software List:
InfoSphere DataStage
Special Instructions:
Client data jar file must be SFTPed to server
Big Data Cluster/RHEL 7.2/Min 2 RHEL Node cluster
Software List:
SROM Java (latest version)
BigInsights 4.2
Ranger 0.5.2
Phoenix (4.6.1)
Titan (1.0.0)
Apache SystemML (0.10.0)
Ambari (2.2.0)
Flume (1.6.0)
Hadoop (2.7.2)
Hbase (1.2.0)
Kafka (0.9.0.1)
Knox (0.7.0)
Slider (0.90.2)
Solr (5.5)
Spark (1.6.1)
BigSheets (4.2)
Big SQL (4.2)
Big R (4.2)
Text Analytics (4.2)
Python (latest 2.7 version)
Pydot (1.0.28) *specific version required
Pyparsing (1.5.6) *specific version required
Numpy (latest version)
Scipy (latest version)
Scikit_learn (latest version)
Orange (latest version)
Ibm-db (latest version)
Pandas (latest version)
Statsmodels (latest version)
Dill (latest version)
View ArticlePlease visit the CygNet Knowledge Base for more information.
View ArticleThere are times when Wellscribe is missing some WSM data. This could be a completion, a well, a job, an event, or some other data that is expected to be in Wellscribe. What could cause this to happen?
Before we answer that question, we need to discuss how data is brought across from WSM to Wellscribe. WSM data is sent to Wellscribe on a delta basis, so only the data added, modified, or deleted since the last successful sync will be brought across each time the servers sync. For example, assume a successful sync occurs at 1PM. The next time a sync is attempted for the server, it will only attempt to bring across data that has been modified after 1PM. Wellscribe brings data across in a specific order because some data is dependent on other data being there first. For instance, jobs are transferred before events because there may be new events referencing new jobs. These events would fail if the job they reference is not in the Wellscribe database. In the event that there is more than one of a specific object (say, 5 jobs) being sent across and one of those objects fails for whatever reason (say the second job), the remaining objects will not even be attempted (jobs 3, 4, and 5 will not be sent across) and the entire sync will be marked as a failure. All objects that were attempted will be sent again in the next sync.
With this knowledge in mind, the most common reason WSM data might be missing from Wellscribe is that Wellscribe is missing data that the new object references. For instance, if a trucking unit is added to a WSM database and a job is added that references that trucking unit, but the trucking unit has not been added to the Wellscribe Master database, then the sync will fail and the job will not be accessible because Wellscribe is not aware of the new trucking unit that the job it tries to bring across is referencing. All events on this job will also fail because the job they reference is not in Wellscribe.
Another reason that WSM data might be missing from Wellscribe is that it might be intentionally filtered. In Wellscribe Admin, an administrator can filter the jobs brought across based on their type and status.
If the server is successfully syncing but data is still missing, then you can simply update the WSM object to make it sync again. You can modify it, commit it to the database, and modify it back to its original value, and it will still attempt to sync the data again.
If the server is not successfully syncing, then you will need to look at the logs to figure out why it is failing. Search for items with "SyncManager", then find the server that is failing.
View ArticleThere appears to be a fairly large amount of data in WSM that comes from a drop down box. You might wonder where this information comes from and how it can be modified. The data that is used to populate these drop down boxes is referred to as reference data. Reference data is data that is referenced, and it is not something that is commonly modified. There are even some columns in reference data that reference other reference data. Some of this data can be modified, added to, or removed from display, while other data is not accessible in this way from the client. We will discuss where a user can modify this information shortly.
Typically, these drop down lists are populated with data from the corresponding table with only one standard filter: RefUserDeleted must be equal to 0. For instance, when you add a job either through a wizard or Job Management, the available Job Types in your drop down box are being pulled from a specific reference data table, and all of the available Job Types will be displayed unless the RefUserDeleted column has a value of 1.
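As a rough illustration (the table name here is hypothetical, not the actual schema name), the query behind such a drop down is essentially: SELECT * FROM RefJobType WHERE RefUserDeleted = 0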
Some drop down boxes are dependent on other provided information. This can be seen with different trucking units being available when you select a different service provider.
A user can modify the reference data in their system by going to Start > Configuration > Reference Data Maintenance. From this grid, a user has access to a predetermined list of reference data split up by its category (or table). If you find the "Ref-Job Type" row and select the Table Data, then it will display all of the Job Types. There are a lot more Job Types in the system I am on than there are in the drop down. This is because there is a high number of Job Types with a Ref-User Deleted value equal to 1.
If your company wanted to add their own Job Type, then they could come here and do just that. Reference Data that a user might add or modify here includes:
Job Type
Job Reason
Trucking Unit
Event Types
Failure Data Types
Foreman
Fields
Reservoirs
Pricing Data
Well Depth Datum
Job Type to Event Type
More
There are two types of data that appear in drop down boxes that are not in the Reference Data Maintenance grid: AFE and Business Organization (also referred to as Vendor, Service Provider, or Manufacturer). AFEs can be managed by going to Start > Configuration > AFE Configuration. Business Organizations can be managed by going to Start > Catalogs > Vendor Catalog. It is very possible that either or both of these will be important to modify, so it is good to know where they can be found.
As for the rest of the drop down boxes, they are mostly predefined and you must contact Weatherford to add to those.
View ArticleThere are 3 new options in LOWIS Admin that are used to control asynchronous update rates. Those options and their usage are as follows:
AsyncUpdateCacheSize - The LOWIS Client will receive a notification from the server when a record in the database has been updated. The LOWIS Client will not update the values from the database until a certain number of available updates has been reached. AsyncUpdateCacheSize controls how many notifications must be accumulated before the records are updated in the client from the database.
AsyncUpdateTimeOut - This option controls the maximum amount of time in seconds that the LOWIS Client will wait to reach the cache size threshold that was specified by AsyncUpdateCacheSize. If the timeout is reached before the cache size minimum is reached, the records that have already accumulated will be updated in the LOWIS Client and the accumulated updates will be cleared out.
AsyncUpdateWaitTime - This specifies the minimum amount of time in seconds that the LOWIS Client will wait before updating the accumulated records. This prevents constant refreshing during heavy server usage times, such as during scheduled scans.
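As a hypothetical illustration (these values are examples, not recommendations): with AsyncUpdateCacheSize = 50, AsyncUpdateTimeOut = 30, and AsyncUpdateWaitTime = 5, the client refreshes once 50 update notifications have accumulated, refreshes anyway after 30 seconds if fewer than 50 have arrived, and in either case never refreshes more often than every 5 seconds.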
Note that a manually-demanded scan will override all of the above options, and the LOWIS Client will update the record as soon as the information is available.
The above options can be found in the following processor sections:
AlarmProc
MonitorProc
BeamProc
SubProc
WellTestProc
InjectProc
PCPProc
GLDProc
PGLProc
Previously, the options were located in production.dso.xml, and any modifications to these update rates would require a hot fix to deploy. Users can now update these values without the need for manually modifying LOWIS files.
View Article