Then you should check out the following video on YouTube:
Well invested 2:45 minutes.
Best Regards,
Andre
The following video on YouTube provides a nice and comprehensive high level overview of SAP Gateway and OData.
So if you want to explain to somebody what SAP Gateway and OData are in just 1:45 minutes you can share this link.
SAP Gateway and OData - YouTube
Best Regards,
Andre
This blog would not have been possible but for the awesome community that is SCN. I would like to thank every contributor who has ever helped me out in my hours of need!
I had tremendous help in navigating my current requirement thanks to the blog posts below.
How to consume an OData service using OData Services Consumption and Integration (OSCI)
Thank you. Andre Fischer
Consuming an External RESTful Web Service with ABAP in Gateway
Both of these blogs helped me understand the intricacies of consuming an OData service.
This blog can be considered an extension of the blog by Paul J. Modderman.
We can consume an OData service using the CREATE_BY_URL method of the class CL_HTTP_CLIENT, but when authentication is involved that method is ill suited.
The CREATE_BY_DESTINATION method, however, enables us to store the credentials in a more standard and secure fashion.
The Requirement:-
I needed to access an OData service that was exposed by HANA. We are required to trigger this service from the ECC system and process the result.
The user would be logging in to ECC directly and not via the portal, so an RFC destination is needed to hold the login credentials.
The Process:-
Step 1.
We have to create the RFC connection in SM59 as below.
Step 2.
Now that we have created the RFC connection, we proceed to creating the HTTP client.
To create the client we use the attached code: CL_HTTP_CLIENT.txt.
DATA: lo_http_client TYPE REF TO if_http_client,
      lv_destination TYPE rfcdest, "RFC destination created in SM59 (declaration added for completeness)
      lv_uri         TYPE string,  "path and query string of the OData service
      lv_service     TYPE string,
      lv_result      TYPE string.
"xml variables
DATA: lo_ixml TYPE REF TO if_ixml,
lo_streamfactory TYPE REF TO if_ixml_stream_factory,
lo_istream TYPE REF TO if_ixml_istream,
lo_document TYPE REF TO if_ixml_document,
lo_parser TYPE REF TO if_ixml_parser,
lo_weather_element TYPE REF TO if_ixml_element,
lo_weather_nodes TYPE REF TO if_ixml_node_list,
lo_curr_node TYPE REF TO if_ixml_node,
lv_value TYPE string,
lv_node_length TYPE i,
lv_node_index TYPE i,
lv_node_name TYPE string,
lv_node_value TYPE string.
************************************************************************
* lv_destination will be the name of the RFC destination we created in SM59
************************************************************************
CALL METHOD cl_http_client=>create_by_destination
EXPORTING
destination = lv_destination
IMPORTING
client = lo_http_client
EXCEPTIONS
argument_not_found = 1
destination_not_found = 2
destination_no_authority = 3
plugin_not_active = 4
internal_error = 5
OTHERS = 6.
IF sy-subrc <> 0.
* MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
* WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
*************************************************************************************************
* We need to build the URI: the part of the OData service URL that comes after the port number.
* This includes the Path and Query string for the service that is being called on the host.
* lv_uri holds the path and query string
*************************************************************************************************
CALL METHOD cl_http_utility=>set_request_uri
EXPORTING
request = lo_http_client->request
uri = lv_uri.
* The request must be sent before the response can be received
lo_http_client->send(
  EXCEPTIONS
    http_communication_failure = 1
    http_invalid_state         = 2
    http_processing_failed     = 3 ).

lo_http_client->receive(
  EXCEPTIONS
    http_communication_failure = 1
    http_invalid_state         = 2
    http_processing_failed     = 3 ).
**************************************************
* Making sense of the result: parsing the XML
**************************************************
lv_result = lo_http_client->response->get_cdata( ).
lo_ixml = cl_ixml=>create( ).
lo_streamfactory = lo_ixml->create_stream_factory( ).
lo_istream = lo_streamfactory->create_istream_string(
lv_result ).
lo_document = lo_ixml->create_document( ).
lo_parser = lo_ixml->create_parser(
stream_factory = lo_streamfactory
istream = lo_istream
document = lo_document ).
"This actually makes the XML document navigable
lo_parser->parse( ).
DATA: lv_name TYPE string.
"Navigate XML to nodes we want to process
*lo_weather_element = lo_document->get_root_element( ).
lv_name = 'content'.
lo_weather_element = lo_document->find_from_name( lv_name ).
lo_weather_nodes = lo_weather_element->get_children( ).
"Move through the nodes and assign appropriate values to export
lv_node_length = lo_weather_nodes->get_length( ).
lv_node_index = 0.
WHILE lv_node_index < lv_node_length.
lo_curr_node = lo_weather_nodes->get_item( lv_node_index ).
lv_node_name = lo_curr_node->get_name( ).
lv_node_value = lo_curr_node->get_value( ).
"Here you would map lv_node_name / lv_node_value into your export structure
ADD 1 TO lv_node_index.
ENDWHILE.
Hope this helps! Let me know if I can clarify further.
Peace!!!!
Now that annotations are making UI5 development easier by using Smart controls, it is important to learn how to add these annotations to your service. SEGW does not yet allow you to add most of the annotations. Till SEGW inherently provides that feature, here is how you can do it using code.
Step 1. Go to your MPC_EXT class
Step 2. Redefine the DEFINE method.
Step 3. Write this code.
DATA: lo_entity_type TYPE REF TO /iwbep/if_mgw_odata_entity_typ, "declarations added for completeness
      lo_property    TYPE REF TO /iwbep/if_mgw_odata_property,
      lo_annotation  TYPE REF TO /iwbep/if_mgw_odata_annotation.

super->define( ). "Ensure you call the parent metadata
lo_entity_type = model->get_entity_type( iv_entity_name = 'EmpDetail' ). "Your entity name
lo_property = lo_entity_type->get_property( iv_property_name = 'DateOfHire' ). "Property inside your entity
lo_annotation = lo_property->/iwbep/if_mgw_odata_annotatabl~create_annotation( /iwbep/if_mgw_med_odata_types=>gc_sap_namespace ). "SAP's annotation namespace
lo_annotation->add( iv_key = 'display-format' iv_value = 'Date' ). "The specific annotation you want to add
This will result in
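If the screenshot is missing: in the service's $metadata document, the property will now carry the annotation as an attribute, roughly like this (assuming DateOfHire is an Edm.DateTime; other attributes elided):

<Property Name="DateOfHire" Type="Edm.DateTime" sap:display-format="Date" .../>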
This blog shares our experience of upgrading a 3-tier NetWeaver Gateway landscape, pointing out the challenges we faced and how we were able to solve them.
Existing 3-tier landscape with NetWeaver Gateway 2.00 SP 07 (NetWeaver 7.31 SP 09) in a central hub deployment model. Applications are connecting to a CRM 7.0 EHP1 system with NetWeaver Gateway 2.00 SP 07 backend components.
Existing 3-tier landscape with SAP Gateway 7.40 SP 13 in a central hub deployment model. No changes to backend systems.
Execute an in-place upgrade. Start with sandbox systems, which are recent copies of the respective production systems.
According to SAP note 1830198, systems can be upgraded independently: upgrading Gateway doesn't require upgrading the backend components of the connected backend systems, assuming sufficient SP levels exist both in the Gateway and the backend systems. In our case we met the SP level requirements. The plan was not to upgrade the backend components.
As soon as we had upgraded sandbox, we realized that our existing Gateway services didn't work anymore. More specifically, none of the services leveraging filter functionality worked, and there were also issues with currencies that had previously worked.
Debugging the filter problem, we found that passing filter values was broken by the upgrade: values were being truncated. Looking into it in detail, we found SAP note 2205402, which we applied to both the Gateway system and the backend system, as instructed by the SAP note. This, however, wasn't sufficient. Since the corrections are only partly contained in 740/13, we also had to implement SAP notes 2232883 and 2241188 on the Gateway system. Even that wasn't sufficient: we also had to implement SAP note 2245413 in the backend system.
Applying the SAP notes fixed the issues with filter functionality. The issue with currencies is explained in SAP note 2028852. We chose to change the applications in order to avoid the decimal problems described in the SAP note.
In order to apply the SAP notes required to fix the issues with filtering, we had to also update the backend components to 2.00 SP 11. The new plan is to execute the in-place upgrade of NetWeaver Gateway and update the backend components.
I'm sure breaking compatibility or interoperability wasn't on SAP's radar, but it happened. I have contacted SAP Gateway Product Management but haven't yet been provided with an official explanation. In our case a simple technical upgrade of NetWeaver Gateway turned into a full-fledged upgrade project.
Take everything with a grain of salt; even official information can't be taken at face value. Test and validate everything yourself, preferably in a sandbox environment.
We are currently executing the new plan in our development landscape. I will update this blog should we run into other issues.
(This is part 1 of a 3 part series. See Part 2 and Part 3 for the whole story.)
Did you ever have a late night instant message conversation that went something like this:
It’s no fun to be in that conversation. You know you’re stuck sitting in front of a screen for at least the next 10 minutes. And since it’s your work laptop, you know that the corporate internet police won’t even let you browse reddit for cat pictures while you wait for VPN and SAP GUI to load up. What's more, you know that whatever this person is yelling about is probably not your fault.
I’ve been there, trust me.
What if your conversation could look like this, instead:
Did you notice Commander Data interject in that exchange? More on that later.
As nerds our jobs often involve performing routine technical tasks for people who use our systems. Maybe you reset a lot of passwords, check the status of integrations, or respond to a support inbox. You probably have loads of different communication tools at your disposal. Chat, email, carrier pigeons…whatever gets the job done. If someone needs your help they’ll generally find a way to get in front of you. Digitally or otherwise.
One of the coolest communication tools I’ve worked with in the last couple years is Slack. It’s got individual conversations, group chats, categories, and anything you’d expect from a team chat tool. It’s quickly overtaken email as my preferred method of talking with colleagues.
Except it’s way more than chat. Slack allows you to use pre-built integrations to look at your Jira tasks, GitHub commits, and thousands of other things. What’s even better: you can make your own integrations that interact with its web API. Which makes it the perfect thing to plug into your SAP Gateway to make use of the REST APIs you’ve already created for other purposes.
In my next couple posts, I’ll show you how to make exactly what I did above using (nearly) free tools.
If you're not using Slack already, you can get a free account. It's very simple and straightforward. Once you've got an account, follow these steps to set up the Slack end of this chain:
What you just did sets things up so that Slack responds to any message starting with "/ask-sap" by sending an HTTP POST to the URL you provided in the settings. The format of the POST will look like the "Outgoing Data" section you saw in the setup process. For this demo, the most important pieces are the token and text fields.
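As a rough illustration, the form-encoded body of that POST contains fields like these (field names per Slack's slash command documentation; the values here are invented):

token=gIkuvaNzQIHg97ATvDxqgjtO
command=/ask-sap
text=shout out a test of RFC MY_DESTINATION
user_name=paul
response_url=https://hooks.slack.com/commands/1234/5678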
That's it! You now have a Slash Command available in any of your Slack channels. It won't do anything yet, but that's what we'll set up in the next section.
On to Part 2!
(This is Part 2 of a 3 part series. See Part 1 and Part 3 for the whole story.)
In Part 1, we got Slack up and running with a Slash Command that will send an HTTP POST to a specified URL endpoint. In this part, I'll show you how to set up a basic Google App Engine web server in Python to respond to this HTTP POST and format a request for SAP Gateway. From Gateway, we'll output the data that the request asks for and send it back to Slack. I will not be exhaustive of all the features of App Engine - this is an SAP blog, after all - but I'll provide sample code, links to how-tos, and some tricks I learned along the way. The amazing thing is that a super basic implementation is only about 40 lines of Python code!
Now you're ready to code! The easiest way to set up a project for App Engine is to do the play-at-home 5-minute version. This will get you a project set up, the right deployment tools installed, and a project folder ready to go. Try it out, test it a few times.
Once you're comfortable with how that works, you can simply replace the code files with code I'll provide below. Note that there are several places in the code where I've put some angle brackets with comments - this is where you'll need to fill in your own solution details. My meager programmer salary won't cover a giant hosting bill because everyone copies my domain/settings and sends all their messages through my server.
First, replace the contents of your app.yaml file with this code:
application: <your-google-app-id>
version: 1
runtime: python27
api_version: 1
threadsafe: true

handlers:
- url: /.*
  script: main.app
Very straightforward, not much to comment on here. Just remember to replace the app-id section at the top.
Next, create a file called main.py (or replace the contents of the existing one) with this code:
import webapp2
import json

from google.appengine.api import urlfetch


class SlackDemo(webapp2.RequestHandler):
    def post(self):
        # Fill in your own Gateway host, credentials and Slack token here
        sap_url = '<your-sap-gateway>/ZSLACK_DEMO_SRV/RfcDestinationSet'
        json_suffix = '?$format=json'
        authorization = 'Basic <your-basic-credentials>'
        slack_token = '<your-slack-token>'

        # Reject requests that don't carry the token Slack generated for us
        request_token = self.request.get('token')
        if slack_token != request_token:
            self.response.headers['Content-Type'] = 'text/plain'
            self.response.write('Invalid token.')
            return

        text = self.request.get('text')
        details = {}
        # "shout" makes the reply visible to the whole channel
        if text.find('shout') > -1:
            details['response_type'] = 'in_channel'

        response_text = ''
        if text.find('test') > -1:
            # The last word of the message is the RFC destination to test
            rfc_destination = text.split()[-1]
            request_url = sap_url + "('" + rfc_destination + "')" + json_suffix
            headers = {}
            headers['Authorization'] = authorization
            response_tmp = urlfetch.fetch(url=request_url,
                                          headers=headers,
                                          method=urlfetch.GET)
            response_info = json.loads(response_tmp.content)
            response_text += 'Sensor sweep indicates the following:\n'
            response_text += response_info['d']['Destination'] + ' - '
            response_text += response_info['d']['ConnectionStatus'] + ' - '
            response_text += str(response_info['d']['ConnectionTime']) + ' ms response'
        else:
            response_text += "I'm sorry, Captain, but my neural nets can't process your command."

        details['text'] = response_text
        json_response = json.dumps(details)
        self.response.headers['Content-Type'] = 'application/json'
        self.response.write(json_response)


app = webapp2.WSGIApplication([
    ('/slackdemo', SlackDemo),
], debug=True)
I'll do a little explaining here.
Build the project and deploy it to the web site you're using. Now we're ready to create the Gateway service that will do the simple RFC test that Commander Data did in part 1.
Off to part 3!
(This is Part 3 of a 3 part series. See Part 1 and Part 2 for the whole story.)
In the last 2 posts we paved the way to get some data out of SAP from Slack. First, we set up Slack to send out a request when a user enters a Slash Command. Then, Google App Engine handles that request and forwards it to Gateway. Now Gateway needs to respond back to Google with the RFC connection test that the Slack user asked for.
Here's a simple OData service setup that will test an RFC connection on the ABAP system. My intention is to inspire you to do other cool solutions - I'm just setting this up to show off quick-n-dirty style to explain concepts. Take this and make something else work for you!
Go to SEGW and create a service. I called mine ZSLACK_DEMO. Here's an example setup of the fields for an entity called RfcDestination:
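In case the screenshot doesn't come through, the properties the rest of this demo relies on are (names taken from the code below; the Edm types are my assumption):

Destination       Edm.String   (key)
ConnectionStatus  Edm.String
ConnectionTime    Edm.Decimal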
Then code up the RFCDESTINATIONSE_GET_ENTITY method in the generated class ZCL_ZSLACK_DEMO_DPC_EXT (assuming you kept the same names I used). Make sure you generate the project first, and then redefine the method just mentioned. Here's a great document on setting up class-based Gateway services that goes more in-depth.
Here's a simple implementation of an RFC ping method that matches up with the service we created.
METHOD rfcdestinationse_get_entity.
  DATA: lv_start   TYPE i,
        lv_end     TYPE i,
        lo_ex      TYPE REF TO cx_root,
        lv_rfcdest TYPE rfcdest,
        ls_key_tab LIKE LINE OF it_key_tab.

  READ TABLE it_key_tab INTO ls_key_tab WITH KEY name = 'Destination'.
  IF sy-subrc IS INITIAL.
    lv_rfcdest = ls_key_tab-value.
  ENDIF.

  er_entity-destination = lv_rfcdest.

  TRY.
      GET RUN TIME FIELD lv_start.
      CALL FUNCTION 'RFC_PING' DESTINATION lv_rfcdest
        EXCEPTIONS
          system_failure        = 1
          communication_failure = 2
          OTHERS                = 99.
      GET RUN TIME FIELD lv_end.

      IF sy-subrc IS INITIAL.
        er_entity-connection_status = 'OK'.
        "Run time is measured in microseconds; convert to milliseconds
        er_entity-connection_time = ( lv_end - lv_start ) / 1000.
      ELSE.
        CALL FUNCTION 'TH_ERR_GET'
          IMPORTING
            error = er_entity-connection_status.
      ENDIF.
    CATCH cx_root INTO lo_ex.
      er_entity-connection_status = lo_ex->get_text( ).
  ENDTRY.
ENDMETHOD.
Maybe not production quality, but ready to do the trick. For a good connection, it will give you an OK ConnectionStatus and a number in milliseconds for the response time. For a bad connection, it will respond with the RFC error in the ConnectionStatus field. Our Google App Engine web server receives this and plugs it into a text response to Slack. When Slack receives the response it puts the text into the chat window for the user who requested it.
Assuming all the pieces of the chain have been done correctly, you can activate your slash command. Try it with something like "/ask-sap shout out a test of RFC <your_destination>". If you're all set, the chat window will shortly return to you with a response from SAP.
This was a very simple prototype implementation - but there are so many things you could do! I'll leave you with a brain dump of ideas to inspire you beyond my work.
Go make cool stuff!
Introduction
More than one user may have access to the same entity, and hence the same data, in SAP Fiori applications. So how do we prevent parallel edit access, so that data is not overwritten in flight?
There are three ways to lock records to manage concurrency.
1. Pessimistic
In this scenario the application assumes that concurrent writes will occur and protects the data by aggressively locking resources. This can lead to deadlocks and reduced performance, as applications using the resource have to wait in a queue to apply their changes.
2. Optimistic
In this scenario the application assumes that concurrent writes are rare and so allows the operation to continue.
3. Semi-optimistic
This is a combination of pessimistic and optimistic concurrency controls. This kind of solution is used for very specific scenarios.
For data concurrency, OData recommends implementing entity tags (ETags). Since OData uses the HTTP protocol and is stateless, it can use only optimistic concurrency control, and only that is discussed in detail further in this paper.
Implementing ETags in SAP Netweaver Gateway
SAP Netweaver Gateway allows three ways to implement ETags.
1. A field based ETag (typically a timestamp)
When an entity is essentially a subset of a database table, and the table has a timestamp field that is updated whenever the record changes, that field can be included in the entity and used as the ETag.
In transaction SAP Gateway Service Builder (SEGW), at the Entity Type level, select a field as your ETag property. Save and re-generate the service.
2. Full entity based ETag
The shortcoming of the above method is that there are scenarios where only a few fields of a database table are exposed in the entity model, but changes to the DB table in the backend can occur through various sources, including background programs, manual changes through backend transactions, etc. These backend changes might not have changed the entity fields at all, yet they would still be considered a change.
In such a scenario the full entity based ETag can be implemented.
Include a field ETAG with data element HASH160 in the entity, and use that field as the ETag property for the entity.
Once all data for the entity has been fetched, the hash value is calculated.
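A minimal sketch of that calculation (assuming the entity structure ER_ENTITY contains the HASH160 field ETAG; serializing via the ID transformation is just one possible approach):

DATA lv_xstring TYPE xstring.

" Serialize the complete entity and compute a SHA-1 hash over it
CALL TRANSFORMATION id SOURCE entity = er_entity RESULT XML lv_xstring.
cl_abap_message_digest=>calculate_hash_for_raw(
  EXPORTING if_data       = lv_xstring
  IMPORTING ef_hashstring = DATA(lv_hash) ).
er_entity-etag = lv_hash.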
3. Partial entity based ETag
Whatever ETag method we implement applies to all operations except Create. There are scenarios where the GET_ENTITYSET and GET_ENTITY implementations of the same entity might not produce the same ETag values. This is due to the fact that certain information in an entity can be costly to fetch during a GET_ENTITYSET operation. In such a scenario the partial entity based ETag can be implemented.
The implementation of the partial entity based ETag is very similar to the full entity based ETag; the only change is the calculation of the hash value.
Here, instead of passing the complete entity, you clear the values of the fields that you do not want to be part of the hash calculation and then call the hash calculation.
For example:
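A sketch along the same lines as the full entity version above (the fields cleared here, LONG_TEXT and ATTACHMENT_COUNT, are purely illustrative):

DATA: ls_entity  LIKE er_entity,
      lv_xstring TYPE xstring.

ls_entity = er_entity.
" Exclude the costly fields so GET_ENTITY and GET_ENTITYSET agree on the ETag
CLEAR: ls_entity-long_text, ls_entity-attachment_count.
CALL TRANSFORMATION id SOURCE entity = ls_entity RESULT XML lv_xstring.
cl_abap_message_digest=>calculate_hash_for_raw(
  EXPORTING if_data       = lv_xstring
  IMPORTING ef_hashstring = DATA(lv_hash) ).
er_entity-etag = lv_hash.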
Using ETags in Fiori
Classic Scenario with ETags
During get_entity, if a valid ETag is provided and the entity has not changed, the server responds with return code 304 (Not Modified). If the entity has changed, the new data is returned with return code 200.
If-None-Match: W/"'DBF5DD4DE0073002917521B7057C0826FC5A7F8E'"
If you do not want to check against an existing ETag but want the data anyway, you can use '*':
If-None-Match: *
Update/Merge/Delete entity requests use the HTTP header If-Match.
If a valid ETag is provided, the OData infrastructure checks the hash value by calling get_entity and verifying the passed ETag value. If it matches, the update/merge/delete operation proceeds without any issues. If the ETag does not match, the server returns status code 412 (Precondition Failed).
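For example, an update request sends back the ETag it last received:
If-Match: W/"'DBF5DD4DE0073002917521B7057C0826FC5A7F8E'"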
Note 1:
As a developer, if you want to update the data from the client regardless (while using the SAP Gateway Client for testing purposes, for example), you can pass the If-Match value '*', which means the client wins and the data is updated irrespective of the ETag.
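For example:
If-Match: *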
Note 2:
During Update/Merge/Delete operations, if an ETag is in use, a get_entity call is made in the backend before the update call to verify the ETag value.
There is already a very fine blog from Martin Bachmann explaining HANA Cloud Platform, the benefits of cloud and HCI OData Provisioning in general, as well as the scenario for connecting SAP HCI OData Provisioning to an SAP Gateway system, here: How to connect the SAP Business Suite to the SAP HANA Cloud Platform using Gateway as a Service Trial Edition. However, like so many things in the cloud, there were already changes as soon as it was published. I will not re-cover the informational content contained therein, but I will show the updated screens for the most recent version.
If you have already configured HCI OData Provisioning using Martin's blog, or on your own, and want to skip to the SAP API Management part, please copy down the URL path to your HCI OData Provisioning services and go to Part 2 of this blog.
A quick bit on "Why SAP API Management?"
SAP API Management does not replace SAP Gateway, and in fact, relies on SAP Gateway to expose data from SAP Backends. What SAP API Management adds is an enhancement of the capabilities provided by SAP Gateway. It can sit on top of a Gateway deployment in order to provide Secure and Scalable access, through security, and data management policies, as well as providing a developer engagement area. With a deployment on HCP, it is even easier, as a user of SAP Gateway only needs to install/run the IWBEP component of Gateway on their SAP Backend system, and use HCI OData Provisioning on HCP to connect to it, consuming the exposed OData endpoints directly in SAP API Management. Additionally SAP API Management can combine other data sources such as HANA XS or Non-SAP data together with Gateway exposed data, exposed via a single secure endpoint for developers to build impressive Apps.
But this walkthrough will focus on using HCI OData provisioning (hereafter referred to as HCIODP) to consume services exposed by SAP Gateway, and expose them as OData endpoints which will be consumed by SAP API Management as API Proxies.
* Pre-Requisite: An account on an SAP Gateway system accessible via Internet. For this walk-through I will be using the SAP Developer Center ES4 system. Anyone can sign up for an account here: SAP NetWeaver Gateway Developer Center
1. Enabling HCIODP in Trial
Log in to your trial account and open the Services section; HCI OData Provisioning is listed under "Integration" in the HCP Trial Account.
Click the HCI OData Provisioning tile to enter the service and check its status. If the status is "Not enabled", click the "Enable" button and wait until you see the service status change to Enabled.
2. Configuring HCIODP Destination(s)
Click “Configure HCI OData provisioning” - This should bring up the “Destinations” tab under "Configure HCI OData Provisioning". Click “New Destination” in order to create the destination for the SAP Developer Center ES4 system.
Enter details for the Gateway system. All details, including login, and password will be those which you have registered on the Gateway system. E.g. for SAP Developer center ES4 system, see below:
After you save, wait until the details are saved in the system, which will be indicated by the configuration screen turning grey and no longer allowing input.
3. Configuring HCIODP Roles
Click the “Roles” tab to configure user access in HCIODP.
Select GW_Admin role, and click “Assign” below in the “Individual Users” section. This will authorize the user to enter the Admin window for HCIODP to configure available services, view logs, or configure Metadata options.
In the popup window, enter the SAP ID login information (P-User, S-User, I#, etc.) you will be using, and click "Assign".
Repeat this process with the role GW_User; this will authorize a user to consume the services configured on HCIODP (but not to access the Admin window).
Once complete, you should have a user assigned to both the roles GW_Admin and GW_User.
4. Configuring HCIODP Services
Click the “HCI OData Provisioning” link at the top of the window to return to the HCIODP base screen.
Then click “Go to Service” from the base screen for HCIODP.
If everything worked correctly, this will open a new browser tab, for the HCIODP Admin screen. You may be prompted to enter SAP ID credentials, enter the credentials for the user configured in the GW_Admin role. After login the Admin screen should appear as below:
To begin exposing services from Gateway system configured, click the "Register" button at the top of the screen to bring up the “Register Service” screen. Select the SAP Gateway system configured in Step 2 from the drop down list for Destinations, then click the icon of a Magnifying Glass next to “Search Services” to bring up a list of available Gateway Services.
Select the desired service to be exposed to API Management by clicking the empty box on the far left to highlight that selection. E.g. to select “GWDEMO” below:
Note: The box will fill blue when selected, and if you move the mouse cursor away, you will see the entire row is blue when selected. If this is not the case, the row was not properly selected.
Click the "Register" button, to register the selected service in HCIODP. The service should now appear in the list of Registered Services for HCIODP.
Click “Open Service Document” for the newly added service, to test that the service is exposing data as expected. This will open a new browser tab, with service data in OData format. Copy down the URL in your browser bar for the service, this will be used as the Target endpoint for the API Proxy.
Repeat these steps above for each Gateway service you want to expose.
When you have completed registering services, the next step will be creating API proxies in SAP API Management, using HCIODP as the OData Target Endpoints, and the Services as the APIs in this case. This will be covered in Part 2.
For questions, feedback, concerns, feel free to leave a comment, or send us an E-Mail.
This is a continuation from Part 1, where we walked through setting up HCI OData Provisioning on HANA Cloud Platform against SAP Developer Center's ES4 Gateway system. If you haven't already completed this, I recommend going through Part 1 in order to be ready for this blog.
Now that you should have at least one SAP Gateway service exposed in HCI OData Provisioning (hereafter referred to as HCIODP), it's time to set things up so that SAP API Management can connect to them. Due to the platform nature of HANA Cloud Platform, this is fortunately remarkably easy. If you have not yet enabled SAP API Management, please follow the steps outlined here: Free Trial of SAP API Management on HANA Cloud Platform is available now!
Once you are set up, you can get started right away.
PART II – Creating API Proxies from OData Endpoints in SAP API Management
1. Creating a "System" connection to HCIODP.
Login to your HCP Trial Account with SAP API Management enabled. Open the Services pane, and locate SAP API Management under the "Integration" section. Launch API Management API Portal by clicking "Access SAP API Management API Portal"
This should load the "Configure" section of SAP API Management by default, if not, click the drop down menu from the top left hand corner and select “Configure”. This is where one generates Target systems for SAP API Management. Click "Create" in the bottom right hand corner to add a system.
This will take you to the Create System window, where you will need to enter the relevant details for the HCIODP system used in Part I.
Title: HCIODP
Details: HCI OData Provisioning system, providing exposed Gateway Services
Host: enter the Base URL from the Service Document URL you saved from Part I. (The format will be gwaas-<userID>trial.hanatrial.ondemand.com)
Port: 443
Check Use SSL
Authentication Type: Basic
Service Collection URL: /CATALOGSERVICE/ServiceCollection
* The Catalog URL is something unique to SAP Gateway systems, which allows SAP API Management to "Discover" available services from the Catalog Service. It is not used for Non-SAP systems.
After checking the details entered, click “Save”. Once the system has been saved, click the “Launch” hyperlink at the bottom left-hand side of the screen; this will open a new tab with the SAP API Management Destinations area in the HCP Cockpit (not to be confused with the global HCP Destinations). Here you will add authentication settings for the newly created system. Click the name of the system you created, e.g. HCIODP, and click the "Edit" button at the bottom of the page.
User: <Use the accountID that was added under GW_User in Part I>
Password: <Password for accountID added under GW_User in Part I>
Next, Click “New Property” under Additional Properties, and enter Key: TrustAll || Value: true
Then click “Save”. Changes can take up to 5 minutes to save (but usually take only a few seconds).
2. Creating an API Proxy using HCIODP Service
Close the HCP Cockpit to return to the API Management API Portal window. Select the drop-down menu at the top left-hand corner and select "Manage" to open the API Proxy page.
In the API Proxy window, click “Create” at the bottom-right hand corner, and select “Create API” from popup selection to bring up the “Create API” screen.
Provider System: <Select the system created in step 1>
Click “Discover” button to see a list of all services exposed by HCI-OData Provisioning.
Select a Service and click “OK” this should auto-fill all the API Information for you.
* Link Target Server creates the API with all system information determined by the Provider System you select, and all pathing linked as a virtual path. This allows for easy transition between environments (such as Dev, Test, Prod) for the API Proxy. The Documentation flag determines whether SAP API Management attempts to pull existing documentation from the service, and is only applicable for SAP Gateway endpoints.
Click “Create” to generate the API Proxy. If everything was entered correctly, the API Details screen should come up, with all Resources (and corresponding descriptions of the model) created automatically from the Metadata.
Then click “Save”. If everything went well you should see a message at the bottom of the screen telling you "API Registered Successfully".
To test that the API Proxy is working correctly, click “GET” for one of the resources with that operation available. If no resources are available, select “Test” from the main Drop-Down menu, then select the name of the API Proxy from the list of APIs.
Click “Authentication: None” and select “Basic Authentication” from the drop down list.
User Name: <UserID added under GW_User in Part I>
Password: <Password for userID added under GW_User in Part I>
Select the “GET” operation.
Click “Send” in the bottom right hand corner.
This should successfully retrieve the OData Collection provided by HCIODP, via a call to the SAP API Management API Proxy Endpoint URL.
Now that you have an API Proxy sitting on top of the HCIODP service, you can start adding Policies to extend the functionality, as well as expose it to the Developer Portal so that Developers could begin to build apps on top of the data. I will not get into that in this blog, as we were just quickly getting up and running connecting SAP API Management to HCIODP.
If you would like to start learning more about what you can do with SAP API Management, I suggest looking at the repository of information SAP API Management - Overview & Getting started which will continue to be updated as more enablement content is added.
Of particular interest to this particular exercise, will be the Blog on creating a Policy SAP API Management - Enabling URL masking . If you notice above, the returned data includes links to the HCI OData Provisioning service, which is not what you want if SAP API Management is the intended target for connectivity. The linked blog will tell you how to have SAP API Management automatically mask all URLs through SAP API Management.
For questions, feedback, concerns, feel free to leave a comment, or send us an E-Mail.
Introduction
As SAP customers, partners and consultants embark on their journey of designing and building their own Fiori-style applications, a major part of this journey will be building their own RESTful APIs using SAP Gateway (OData services).
I'm going to break down my learning and insights into SAP Gateway development into a series of blogs where I would like to cover the following topics:
Acknowledgement and Use Cases
The development patterns covered in this series are not originally mine. If you look at the My Accounts Fiori application delivered by SAP in CRM (UICRM001), this pattern was written to support the OData service "CRM_BUPA_ODATA".
Our colleagues who put together this pattern probably don't realise how much of a gem it has been for me in rapidly producing re-usable, nicely encapsulated entity implementations in SAP Gateway, so hats off to the folks who put a lot of thought and effort behind this.
You can consider this pattern when creating your own OData services in the customer namespace. For modifying SAP delivered OData services you should always follow the SAP recommended approach and enhancement documentation.
Key Learnings from this article:
Putting the Pattern Together
First let’s cover our typical OData service at a high level, using a simple entity relationship model as an example (Account, or Business Partner, with an Address association):
SEGW - Design Time.
Here we define our entities, the attributes of those entities and the associations between them (navigation paths).
In our example we want to look at two entities, Account and Address. You can generate an entity in various ways, but I like to generate new entities from a custom ABAP structure.
DPC, DPC_EXT - Data Provider Classes
These are the classes that are generated when you activate the OData service, and where you have been told to implement your entity logic for CRUD functions.
I have a couple of rules around DPC_EXT:
Entity Implementation and Class Hierarchy
Most implementations I have seen have a loose encapsulation concept; in other words, the CRUD methods in the DPC_EXT class contain some business logic, and other business logic is contained somewhere else.
I don’t like this, as it becomes difficult to maintain and causes regression problems when new developers take over and add new features.
Instead we can use a nice class hierarchy that helps us encapsulate different business logic in layers.
This is the current hierarchy I like:
In context with our “Account” entity, this is what we would end up with:
ZCL_ODATA_RT | This class provides a base line search for GET_ENTITYSET that handles skip / count / paging. The base line search is based on a dynamic SQL approach. It also allows us to do eTag calculation across all our entities. Apart from the search pattern in this class, I only use it for Gateway framework type functions. |
ZCL_ODATA_BP | I’ve called this a modular layer; we can encapsulate module-specific functions. In our use case for Accounts (Business Partner) we may have common functions, such as adding leading zeros to a BP number (internal / external display) or formatting the Business Partner's full name, that we can write once and call from any concrete class that belongs to our module. |
ZCL_ODATA_ACCOUNT | This is the concrete class; here you implement the CRUD functions GET_ENTITYSET, GET_ENTITY, UPDATE_ENTITY etc., which are called from the corresponding methods in the actual DPC_EXT class of your service. All entity-specific business logic is maintained here. |
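A bare-bones sketch of that hierarchy (the class names are the ones from the table above; the helper method name is illustrative and the bodies are left empty):

* Base runtime layer: generic dynamic-SQL search, paging, eTag calculation
CLASS zcl_odata_rt DEFINITION.
  PUBLIC SECTION.
    METHODS get_entityset.   "generic search handling skip / count / paging
ENDCLASS.
CLASS zcl_odata_rt IMPLEMENTATION.
  METHOD get_entityset.
  ENDMETHOD.
ENDCLASS.

* Module layer: functions shared by all Business Partner related entities
CLASS zcl_odata_bp DEFINITION INHERITING FROM zcl_odata_rt.
  PUBLIC SECTION.
    METHODS format_bp_number.   "e.g. leading zeros, internal/external display
ENDCLASS.
CLASS zcl_odata_bp IMPLEMENTATION.
  METHOD format_bp_number.
  ENDMETHOD.
ENDCLASS.

* Concrete layer: entity-specific CRUD logic for the Account entity
CLASS zcl_odata_account DEFINITION INHERITING FROM zcl_odata_bp.
  PUBLIC SECTION.
    METHODS get_entityset REDEFINITION.
    METHODS get_entity.
ENDCLASS.
CLASS zcl_odata_account IMPLEMENTATION.
  METHOD get_entityset.
  ENDMETHOD.
  METHOD get_entity.
  ENDMETHOD.
ENDCLASS.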
Re-use of Entity Objects
Start thinking about the services you need to build. Are there re-usable entities across these services? Let’s take a look at a one example.
Below is a diagram that shows two OData services:
These two services share common entities:
In some implementations I have seen the same code duplicated across different DPC_EXT classes for the same entity, which doesn't lend itself to good re-use patterns, although there may certainly be a use case where this is necessary.
Here is what I mean about acceleration: once I have the Account and Address entity implementations up and running, tested and stable, I can re-use these entities across new services I'm putting together.
The initial build is obviously the longest, but scaffolding up a new service with the same entities then becomes considerably faster.
Data Provider Class ( DPC_EXT )
To facilitate this pattern, we need to make some changes in our DPC_EXT that allow us to access the instantiated concrete classes at runtime.
First we need an attribute on our DPC_EXT class that holds the instances of each entity implementation:
The structure of the table is:
Now to instantiate our entity implementations, during the constructor call in the DPC_EXT class we instantiate each entity and store the instance, class name and entity name:
Now we need to call our entity concrete class implementation. Here we assume the service has been generated and you have re-defined the relevant method.
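In case the screenshots don't render, here is a rough sketch of the idea; the type, attribute and entity names are assumptions, not necessarily the exact ones from my project:

* Instance table held as an attribute of the DPC_EXT class
TYPES: BEGIN OF ty_entity_instance,
         entity_name TYPE string,
         class_name  TYPE seoclsname,
         instance    TYPE REF TO object,
       END OF ty_entity_instance.
DATA gt_entity_instances TYPE STANDARD TABLE OF ty_entity_instance WITH DEFAULT KEY.

* Constructor: instantiate each concrete entity class once and register it
METHOD constructor.
  super->constructor( ).
  APPEND VALUE ty_entity_instance( entity_name = 'Account'
                                   class_name  = 'ZCL_ODATA_ACCOUNT'
                                   instance    = NEW zcl_odata_account( ) )
         TO gt_entity_instances.
ENDMETHOD.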
ZCL_ODATA_RT
We've mentioned this class called ZCL_ODATA_RT. The purpose of this class is to provide:
Overview of Entity Search
The first thing we probably want to do in the Fiori application is search for an entity. I usually start by building out the generic search pattern in ZCL_ODATA_RT that follows this basic flow:
NB: I don't have to use this when setting up a new concrete class for the first time; I can redefine the GET_ENTITYSET method in ZCL_ODATA_ACCOUNT and write other code to do the search for us.
Here is an overview of the logic of the GET_ENTITYSET method in ZCL_ODATA_RT:
1) Process paging, max hits and skip tokens. This allows our dynamic SQL to retrieve results based on certain filter parameters, and then we have a generic paging pattern taken care of for us. Think about the master object list in a Fiori app, where you type in some search criteria and the OData call includes a "substring" filter.
2) Generate a dynamic SQL statement. This method contains the select, inner join, where statements.
3) Apply additional filters. Here I can read my entity filters passed by $filter and add additional criteria to our dynamic SQL.
4) Execute the dynamic SQL statement and page the results
5) Enrich the data and return the result. This is where you populate additional attributes of the entity that could not be retrieved by the dynamic SQL statement, things such as texts or formatted names. The sketch below makes the core of this flow concrete.
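A sketch of the core of that flow, under assumed attribute names (gv_select_clause, gv_from_clause and gv_where_clause hold the dynamically generated tokens):

" <fs_results> is assumed to point at the result table created for the entity set
FIELD-SYMBOLS <fs_results> TYPE STANDARD TABLE.

DATA(lv_skip) = is_paging-skip.
DATA(lv_top)  = COND i( WHEN is_paging-top > 0 THEN is_paging-top ELSE 100 ). "default page size assumed
DATA(lv_max)  = lv_skip + lv_top.

" Execute the dynamically generated statement, reading at most skip+top rows
SELECT (gv_select_clause)
  FROM (gv_from_clause)
  INTO CORRESPONDING FIELDS OF TABLE <fs_results>
  UP TO lv_max ROWS
  WHERE (gv_where_clause).

" Paging: drop the skipped rows; what remains is the requested page
IF lv_skip > 0.
  DELETE <fs_results> FROM 1 TO lv_skip.
ENDIF.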
Implementing the search in our concrete class
I now need to implement this in our concrete class. Once I have created the ZCL_ODATA_ACCOUNT class, I can redefine the appropriate methods where I need to put my logic. This includes:
Generate Select
In our generate select method, all we are doing is forming the dynamic SQL: which attributes we want to select into our entity set. We can also include joins here if we want to search across multiple tables.
Enrich Entity Set Data
Before we pass the selected entity set records back to the OData framework, we want to format and add additional things to the result, such as full names or texts. In our example we've just formatted the full name of the business partner into the field FULL_NAME.
We now have a concrete class implementation with an entity set search routine.
Implementing Other CRUD methods
We can then continue to implement the other CRUD functions by redefining the appropriate methods in our concrete class (ZCL_ODATA_ACCOUNT) and data provider class.
Let's do a GET_ENTITY example. Here is the redefinition in the DPC_EXT class:
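In case the screenshot doesn't render, the redefined DPC_EXT method essentially just delegates to the registered instance (the method name and signature follow the usual SEGW naming, so treat this as a sketch):

METHOD accountset_get_entity.
  " Look up the registered Account instance and delegate the call
  READ TABLE gt_entity_instances INTO DATA(ls_instance)
       WITH KEY entity_name = 'Account'.
  DATA(lo_account) = CAST zcl_odata_account( ls_instance-instance ).
  lo_account->get_entity( EXPORTING it_key_tab = it_key_tab
                          IMPORTING er_entity  = er_entity ).
ENDMETHOD.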
And let's put some logic in the ZCL_ODATA_ACCOUNT method GET_ENTITY:
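A minimal sketch of what that logic could look like; the table read is illustrative only:

METHOD get_entity.
  READ TABLE it_key_tab INTO DATA(ls_key) WITH KEY name = 'AccountId'.
  CHECK sy-subrc = 0.
  " Read the business partner master data for the requested key
  SELECT SINGLE partner, name_org1
    FROM but000
    WHERE partner = @ls_key-value
    INTO CORRESPONDING FIELDS OF @er_entity.
  " Further enrichment (texts, formatted full name etc.) would go here
ENDMETHOD.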
I won't go through POST, DELETE and UPDATE; this should provide enough foundation for how the pattern works and how it encapsulates our business logic nicely into a concrete object.
Summary
In this part of the series we've demonstrated that it is possible to build a re-use pattern that encapsulates our entity implementations cleanly and also include a powerful search feature that we can consume on demand if required without having to re-write the same functions more than once.
I hope you have found this blog useful, and see you next time when we cover some performance considerations for your OData services and ETag handling for data concurrency control.
Stay tuned for updates as I prepare a set of video walkthroughs to augment the content outlined in part 1 of this series and to show more complex search patterns.
Thanks for stopping by.
Did you know that the method parameter IV_SOURCE_NAME of GET_ENTITYSET has different values depending on whether the service is triggered with or without $expand? This also applies to the parameter IT_NAVIGATION_PATH. I recently found out about this the hard way, because not knowing it caused a bug in my code, and I think it’s worth sharing my findings. Basically, you will see that using $expand fills the parameters mentioned above differently compared to directly triggering GET_ENTITYSET without $expand but still with navigation properties.
First of all, I’m using NW 7.4 SP12 together with SAP_GWFND SP12 (and some additional patches/notes).
We’re going to use the GWSAMPLE_BASIC project/service that is shipped with your GW installation. You can simply go to the Service Builder (transaction code SEGW) and open the project /IWBEP/GWSAMPLE_BASIC. I like this project because I could see how SAP implements GW services – and I learned a lot from that in the past :-) Also, check Demo service ZGWSAMPLE_SRV | SCN by Andre Fischer which tells you a little more about the data model of the Gateway Sample Project.
Now let’s first set a breakpoint in the method CONTACTSET_GET_ENTITYSET of the class /IWBEP/CL_GWSAMPLE_BAS_DPC_EXT. You can navigate there directly from SEGW (see screenshot above), or just use SE80.
Now, after we have set the breakpoint, let’s try some example URLs and see what happens.
1. Without $expand: /sap/opu/odata/IWBEP/GWSAMPLE_BASIC/SalesOrderSet('0500000000')/ToBusinessPartner/ToContacts
As you can see this URL is using two Navigation Properties. First, we navigate to the BusinessPartner of a given SalesOrder, then to the BusinessPartner’s contacts. This will only return the contact data to the caller of the service. Calling this URL will hit the breakpoint we set above:
While we are in the debugger inside CONTACTSET_GET_ENTITYSET let’s check the two mentioned parameters:
This basically tells us from the source "SalesOrder", we first navigated to the BusinessPartner and then to the contacts. The itab IT_NAVIGATION_PATH also contains information about the Target_Type (=Entity) and the corresponding namespace. So far so good. Now let’s assume we want a given SalesOrder and the contacts of its BusinessPartner in one request. We will achieve this using $expand.
2. With $expand: /sap/opu/odata/IWBEP/GWSAMPLE_BASIC/SalesOrderSet('0500000000')?$expand=ToBusinessPartner/ToContacts
Again, hitting this URL will hit our breakpoint. However, this time both IV_SOURCE_NAME and IT_NAVIGATION_PATH have different values (see screenshots below):
This basically tells us from the source "BusinessPartner" we navigated to the contacts. However, this is not what I expected. I expected that using $expand in this scenario would be equal to calling our first URL from above in Step 1 (/sap/opu/odata/IWBEP/GWSAMPLE_BASIC/SalesOrderSet('0500000000')/ToBusinessPartner/ToContacts).
Instead, when using $expand in our scenario it’s basically equal to calling the URL /sap/opu/odata/IWBEP/GWSAMPLE_BASIC/BusinessPartnerSet('0100000000')/ToContacts, the explanation comes next.
3. Behind the scenes: /sap/opu/odata/IWBEP/GWSAMPLE_BASIC/BusinessPartnerSet('0100000000')/ToContacts
So, when using $expand, we now know what is actually called behind the scenes by the Gateway. Well, let’s prove this in the debugger by calling /sap/opu/odata/IWBEP/GWSAMPLE_BASIC/BusinessPartnerSet('0100000000')/ToContacts directly:
As you can see, the result is equal to what we had in Step 2 – and that’s the proof:
However, this time for some reason the TARGET_TYPE_NAMESPACE is filled correctly – which I can’t explain :-)
I’m sure there is a good reason for the different values passed to the parameters – but I don’t know that reason :-)
Anyway… I’m aware of this now and I hope you are as well.
Introduction
Part 1 of this series can be found here if you have not seen it yet:
In Part 1 we discussed development patterns in SAP Gateway and how we can achieve re-use and business logic encapsulation.
Two concepts we covered were a “search pattern” where we used Dynamic SQL to retrieve entity sets and a class hierarchy to encapsulate business logic and separate out functions on a module and gateway level.
In this blog I’d like to clarify and enhance the discussion by covering different options for Search and how to do Service Inclusion for re-use purposes.
In Part 1 we covered a possible class hierarchy where we could gain re-use and business logic encapsulation across our OData services and also included SAP module specific functions.
We also covered a generic search pattern that uses Dynamic SQL, that is ABAP code where you build up your own query and execute the SQL statements by manipulating “From” “Where” clauses.
The dynamic SQL doesn't sit well with a lot of people, and here are a few reasons why:
This is an example I put together using the Reporting Framework in CRM. We created a new class, "ZCL_ODATA_QRY_RF", the name designating that this class is coupled with the Reporting Framework. It still inherits from our ZCL_CRM_ODATA_RT class.
We have a simple constructor that takes in a CRM BOL query and object type (like a Business Activity, BUS2000126) and instantiates an instance of the IO_SEARCH class provided by SAP.
And of course a "Search" method that takes in some query parameters, the IS_PAGING structure from our OData service, and a flag that allows returning only the GUIDs as a result rather than all the functional data.
This is our search method implementation:
METHOD search.
  DATA: lt_return      TYPE bapiret2_tab,
        ls_message_key TYPE scx_t100key,
        lv_max_hits    TYPE i.

  FIELD-SYMBOLS: <result_tab> TYPE STANDARD TABLE.

  IF it_query_parameters IS INITIAL.
    RETURN.
  ENDIF.

  CALL METHOD gr_search->set_selection_parameters
    EXPORTING
      iv_obj_il               = gv_query
      iv_obj_type             = gv_object_type
      it_selection_parameters = it_query_parameters
    IMPORTING
      et_return               = lt_return
    EXCEPTIONS
      partner_fct_error         = 1
      object_type_not_found     = 2
      multi_value_not_supported = 3
      OTHERS                    = 4.

  READ TABLE lt_return ASSIGNING FIELD-SYMBOL(<fs_message>) WITH KEY type = 'E'.
  IF sy-subrc = 0.
    ls_message_key-msgid = 'CLASS'.
    ls_message_key-msgno = 000.
    RAISE EXCEPTION TYPE /iwbep/cx_mgw_busi_exception
      EXPORTING
        textid = ls_message_key.
  ENDIF.

  IF iv_keys_only = abap_true.
    CALL METHOD gr_search->get_result_guids
      EXPORTING
        iv_max_hits  = lv_max_hits
      IMPORTING
        et_guid_list = rt_guids
        et_return    = lt_return.
    READ TABLE lt_return ASSIGNING <fs_message> WITH KEY type = 'E'.
    IF sy-subrc = 0.
      ls_message_key-msgid = 'CLASS'.
      ls_message_key-msgno = 000.
      RAISE EXCEPTION TYPE /iwbep/cx_mgw_busi_exception
        EXPORTING
          textid = ls_message_key.
    ENDIF.
  ELSE.
    CALL METHOD gr_search->get_result_values
      EXPORTING
        iv_max_hits  = lv_max_hits
      IMPORTING
        et_results   = rt_results
        et_guid_list = rt_guids
        et_return    = lt_return.
    READ TABLE lt_return ASSIGNING <fs_message> WITH KEY type = 'E'.
    IF sy-subrc = 0.
      ls_message_key-msgid = 'CLASS'.
      ls_message_key-msgno = 000.
      RAISE EXCEPTION TYPE /iwbep/cx_mgw_busi_exception
        EXPORTING
          textid = ls_message_key.
    ENDIF.
  ENDIF.

**********************************************************************
* Process Top / Skip tokens for paging
**********************************************************************
  IF is_paging-skip > 0.
    DELETE rt_results FROM 1 TO is_paging-skip.
    DELETE rt_guids FROM 1 TO is_paging-skip.
  ENDIF.
  IF is_paging-top > 0.
    DELETE rt_results FROM ( is_paging-top + 1 ).
    DELETE rt_guids FROM ( is_paging-top + 1 ).
  ENDIF.
ENDMETHOD.
So now when you want to execute the search in your concrete class, you can consume the SEARCH method in the inherited Search Framework plugin you've created:
METHOD /iwbep/if_mgw_appl_srv_runtime~get_entityset.
  DATA: lt_query_parameters TYPE genilt_selection_parameter_tab,
        ls_query_parameter  LIKE LINE OF lt_query_parameters,
        lt_sort             TYPE abap_sortorder_tab,
        ls_sort             TYPE abap_sortorder.

  FIELD-SYMBOLS: <fs_results> TYPE STANDARD TABLE.

  CREATE DATA er_entityset TYPE TABLE OF (gv_result_structure).
  ASSIGN er_entityset->* TO <fs_results>.

**********************************************************************
* Navigation Path from an Account
**********************************************************************
  READ TABLE it_key_tab ASSIGNING FIELD-SYMBOL(<fs_key>) WITH KEY name = 'AccountId'.
  IF sy-subrc = 0.
    ls_query_parameter-attr_name = 'ACTIVITY_PARTNER'.
    ls_query_parameter-sign      = 'I'.
    ls_query_parameter-option    = 'EQ'.
    MOVE <fs_key>-value TO ls_query_parameter-low.
    APPEND ls_query_parameter TO lt_query_parameters.
  ENDIF.

**********************************************************************
* Process Filters
**********************************************************************
  LOOP AT it_filter_select_options ASSIGNING FIELD-SYMBOL(<fs_filter_select_option>).
    CASE <fs_filter_select_option>-property.
      WHEN 'ProcessType'.
        LOOP AT <fs_filter_select_option>-select_options ASSIGNING FIELD-SYMBOL(<fs_select_option>).
          MOVE-CORRESPONDING <fs_select_option> TO ls_query_parameter.
          ls_query_parameter-attr_name = 'PROCESS_TYPE'.
          APPEND ls_query_parameter TO lt_query_parameters.
        ENDLOOP.
      WHEN OTHERS.
        " ... <DO SOME MORE STUFF HERE TO HANDLE FILTERS> ...
    ENDCASE.
  ENDLOOP.

  CALL METHOD search
    EXPORTING
      it_query_parameters = lt_query_parameters
      iv_keys_only        = abap_false
      is_paging           = is_paging
    IMPORTING
      rt_results          = <fs_results>.

  " ... COPY THE RESULTS TO THE EXPORT TABLE ETC ...
Whilst this is a brief example, it shows the possibility of plug and play type frameworks for your OData services rather than tackling dynamic SQL that was included in the first part of this blog series.
If you've implemented a similar pattern, I would love to hear from you with details about what you have put together.
Service Inclusion is the process of including another Service Model in your SAP Gateway Service Builder project, like this:
Effectively, it allows you to access the Entities in service B directly from service A:
This approach maximises the re-use of your services, not only will you get re-use and business logic encapsulation in the class hierarchy, your design time re-use is maximised as well.
There are a couple of things to watch out for.
Referring to the diagram above, when you execute the URI for “/SALES_ORDER_SRV/Accounts(‘key’)/SalesOrders” the navigation path is not populated, a Filter parameter is passed to the SalesOrder GET_ENTITYSET where you now must filter the Sales Orders by the Account ID.
What I mean by “limitation” is that usually you will be passed a navigation path (IT_NAVIGATION_PATH) where you can assess where the navigation came from and what the keys were; in this use case you are missing the KEY_TAB values in the IT_NAVIGATION_PATH importing table of your GET_ENTITYSET method.
For this to work you must also set the referential constraint in the SEGW project and build your Associations as an external reference, like this:
When the SAP Gateway framework attempts to assess which model reference your entity belongs to so it can execute a CRUD function, the underlying code loops over the collection of model references you have included ( in Service B ) and tries to find the first available model where your external navigation entity is located.
In case you have implemented the same entity in multiple included services, SAP picks up the first available one, which can lead to surprises if you're wondering why your breakpoint is not triggered during a GET_ENTITYSET method call – it may be running in the wrong service.
If you haven't used this feature yet, it provides a really great way to maximise re-use of your OData services. Just a side note here: a really good use for this pattern is your common configuration entity model, things such as country and state codes, status key-value pairs etc.
I have built a common service before that reads all of the configuration we use across different Fiori applications, contained in one common service.
This way I simply "include" the common service so I don't have to keep implementing the same entity set or function import in different services.
I have put together this blog to try to clarify and provide a different perspective on search capability within our OData services. I know there are some of us out there who dislike the dynamic SQL scenario, and for good reason.
My aim is to encourage some thought leadership in this space, so those customers tackling their OData service development can at least learn from our experience and embrace re-use patterns to really try and reduce TCO not to mention accelerate UX development in parallel.
As always, I'd love to hear from our community about other development patterns you've come up with so please share your thoughts and learnings, it's really important as our customers start to ramp up their OData and Fiori development.
While working with multi origin in the service URL, we have to pass the additional parameter SAP_Origin (which is nothing but your system alias pointing to the corresponding ECC) in filter and get-entity operations.
When working on these scenarios I found one interesting possibility: passing the System ID (SID) and client (of the ECC system) to fetch data from a particular system.
What is Multi Origin?
Multiple origin is used to fetch data from different back-end systems, collecting the data in one single service, and to update different back-end systems using the same user.
Why System ID and Client?
Pre-Requisite:
Alias e.g.
FIORI_ECC1 : Backend System Alias 1 (SID - ABC, Client -100)
FIORI_ECC2 : Backend System Alias 2 (SID - XYZ, Client -100)
How to Use System ID and Client in URL ?
Now we are all set to test the scenario of using SID and client in the Gateway service URL.
Syntax for using SID and Client:
/sap/opu/odata/sap/ZPROJECT_SRV;o=sid(SID.Client)/EntitySet
In our case it will be: /sap/opu/odata/sap/ZPROJECT_SRV;o=sid(ABC.100)/EntitySet
See the result, which restricts the output to results from system ABC-100 only.
Syntax for using System Alias:
/sap/opu/odata/sap/ZPROJECT_SRV;o=SystemAlias/EntitySet
or /sap/opu/odata/sap/ZPROJECT_SRV;mo/EntitySet?$filter=SAP_Origin eq 'SystemAlias'
In our case it will be: /sap/opu/odata/sap/ZPROJECT_SRV;o=FIORI_ECC1/EntitySet
See the result, which restricts the output to results from system alias FIORI_ECC1 only.
It works the same way for create and deep create operations.
Below I am showing an example working with create deep entity.
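For instance, a deep create can simply be posted against the SID-addressed URL, following the same syntax as above (a hypothetical request):
POST /sap/opu/odata/sap/ZPROJECT_SRV;o=sid(ABC.100)/EntitySet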
The request is processed as follows:
1. The SAP NetWeaver Gateway system searches for all system aliases that exist for the user and the specified service.
2. It checks whether one of these system aliases equals sid(ABC.100). If so, that system is used.
3. If no such alias exists underneath the specified service, Gateway checks whether one of the aliases has system ID ABC and client 100 defined.
4. If so, that system is used; otherwise an error message is displayed.
I hope this helps in your new developments.
It seems to me that the current orthodoxy is that an OData service should be developed for each UI5 app. In other words, each app should only have to call one service. I think we should question whether that is the best approach.
Of course, a UI5 app can use many OData models, so long as each one has a unique name within the app.
In my opinion, when developing OData services (e.g. using SAP NetWeaver Gateway or HCI OData Provisioning), we should think of the big picture. We shouldn't be too focused on the bespoke Fiori-style app we happen to be developing that day. In a few years' time we may have a large number of Fiori apps, both standard and bespoke, and there may be many other consumers of our OData services, both internal (e.g. our own company website or intranet) and external (e.g. customer or supplier systems).
Why not take an 'API' approach, and think of our OData services as a way of interacting with the entities in our back-end system? Why not organise the services in such a way as to make them easy to navigate for any UI developer working with any technology? For example, we could have a service for customers, one for inventory and one for employees. It seems to me that this would be much more in line with a RESTful architecture. Just because we are using an SAP technology, such as Gateway, to deliver our services, it doesn't mean we should only consider SAP UI technologies as consumers. A big plus of the modern web architecture is that the UI layer is completely decoupled from the data provider, and I think we should take advantage of that.
If we (SAP and customers) carry on developing new services at a rapid rate, within a few years I fear we will have a large number of overlapping services. It will be so hard to find the right one that developers will simply create a brand-new service rather than sort through that long list, and there will be much duplication of logic.
Each published service represents a responsibility (does it work as it should?) and a vulnerability (could it facilitate malicious activity?). This isn't new (the same can be said for a remote-enabled function module) but surely a very large number of services makes it harder to manage these risks.
This is my take on the pros and cons of the one-service-per-app model:
So, is there an obvious point that I am missing? Are there advantages of the one-service-per-app approach that I have missed? Am I wrong to call one-service-per-app the orthodoxy?
What is the strategy in your organisation (or at your clients) for OData services? I look forward to your comments.
Step 1. Open transaction SEGW and create a project as shown in the screenshot below.
Provide the following details.
The components then get displayed automatically.
Step 2. Create an entity type as follows.
Provide the following.
Click on Properties as shown below.
Add the following values for the header as shown below.
In the same way, create the entity type PR_Item for the item and give the following values.
Step 3. Create an entity set as shown below.
Give the following details.
The header entity set is then created.
In the same way, the item entity set is created.
Step 4. Create an association as shown below.
The association set is created automatically.
Step 5. The navigation is now created automatically.
Step 6. After completion of the data model, the Service Implementation node is filled automatically, as shown in the screenshot below.
Step 7. Now we need to generate the runtime artifacts; for that, select Runtime Artifacts and click on Generate Runtime Objects.
Click OK and save.
We get the following in Runtime Artifacts.
Step 8. We need to write code in ZCL_Z_PURCHASE_REQUISI_DPC_EXT, so double-click on it.
Step 9a. Right-click on the methods and redefine the required methods as follows.
Then write the following code for the GET_ENTITY method.
Code for GET_ENTITY
METHOD prheadercollecti_get_entity.
  DATA ls_key_tab LIKE LINE OF it_key_tab.

  " Read the PRNumber key supplied in the request URL
  READ TABLE it_key_tab INTO ls_key_tab WITH KEY name = 'PRNumber'.
  er_entity-prnumber = ls_key_tab-value.
ENDMETHOD.
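The method above only echoes the key back. As a next step, it could also verify that the requisition actually exists; a minimal sketch reusing BAPI_REQUISITION_GETDETAIL (the same BAPI used below), with simplified error handling:

DATA: lt_items  TYPE STANDARD TABLE OF bapieban,
      lt_return TYPE STANDARD TABLE OF bapireturn.

" Read the requisition items to check whether the PR exists
CALL FUNCTION 'BAPI_REQUISITION_GETDETAIL'
  EXPORTING
    number            = er_entity-prnumber
  TABLES
    requisition_items = lt_items
    return            = lt_return.

IF lt_items IS INITIAL.
  " No items found: report the PR as non-existent
  RAISE EXCEPTION TYPE /iwbep/cx_mgw_busi_exception
    EXPORTING
      textid = /iwbep/cx_mgw_busi_exception=>business_error.
ENDIF.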
Step 9b. Likewise, redefine the other required methods PRITEMCOLLECTION_GET_ENTITYSET and CREATE_DEEP_ENTITY in the same way.
Code for GET_ENTITYSET
METHOD pritemcollection_get_entityset.
  DATA: ls_key_tab       TYPE /iwbep/s_mgw_name_value_pair,
        lv_pr_number     TYPE banfn,
        lv_pr_item       TYPE bnfpo,
        lt_pr_items_bapi TYPE STANDARD TABLE OF bapieban,
        ls_pr_item_bapi  TYPE bapieban,
        lt_return        TYPE STANDARD TABLE OF bapireturn,
        ls_entityset     LIKE LINE OF et_entityset.

  " Read the keys supplied via the navigation from the header
  READ TABLE it_key_tab INTO ls_key_tab WITH KEY name = 'PRNumber'.
  lv_pr_number = ls_key_tab-value.

  READ TABLE it_key_tab INTO ls_key_tab WITH KEY name = 'PRItem'.
  IF sy-subrc = 0.
    lv_pr_item = ls_key_tab-value.
  ENDIF.

  " Fetch all items of the purchase requisition
  CALL FUNCTION 'BAPI_REQUISITION_GETDETAIL'
    EXPORTING
      number            = lv_pr_number
    TABLES
      requisition_items = lt_pr_items_bapi
      return            = lt_return.

  " Optionally restrict the result to a single item
  IF lv_pr_item IS NOT INITIAL.
    DELETE lt_pr_items_bapi WHERE preq_item <> lv_pr_item.
  ENDIF.

  " Map the BAPI result to the entity set
  LOOP AT lt_pr_items_bapi INTO ls_pr_item_bapi.
    ls_entityset-pritem             = ls_pr_item_bapi-preq_item.
    ls_entityset-purgroup           = ls_pr_item_bapi-pur_group.
    ls_entityset-material           = ls_pr_item_bapi-material.
    ls_entityset-shorttext          = ls_pr_item_bapi-short_text.
    ls_entityset-plant              = ls_pr_item_bapi-plant.
    ls_entityset-materialgroup      = ls_pr_item_bapi-mat_grp.
    ls_entityset-quantity           = ls_pr_item_bapi-quantity.
    ls_entityset-unit               = ls_pr_item_bapi-unit.
    ls_entityset-documenttype       = ls_pr_item_bapi-doc_type.
    ls_entityset-itemcategory       = ls_pr_item_bapi-item_cat.
    ls_entityset-acctassigncategory = ls_pr_item_bapi-acctasscat.
    ls_entityset-prnumber           = ls_pr_item_bapi-preq_no.
    ls_entityset-deliverydate       = ls_pr_item_bapi-deliv_date.
    APPEND ls_entityset TO et_entityset.
    CLEAR ls_entityset.
  ENDLOOP.
ENDMETHOD.
Code for CREATE_DEEP_ENTITY
METHOD /iwbep/if_mgw_appl_srv_runtime~create_deep_entity.
  " Local deep structure: header fields plus the item collection
  TYPES: ty_t_pr_items TYPE TABLE OF zcl_z_purchase_requisi_mpc=>ts_pr_item WITH DEFAULT KEY.
  TYPES: BEGIN OF ts_pr_items.
           INCLUDE TYPE zcl_z_purchase_requisi_mpc=>ts_pr_header.
  TYPES:   pritemcollection TYPE ty_t_pr_items,
         END OF ts_pr_items.

  DATA: lv_new_pr_no TYPE bapiebanc-preq_no,
        ls_bapi_item TYPE bapiebanc,
        lt_bapi_item TYPE TABLE OF bapiebanc,
        lt_return    TYPE TABLE OF bapiret2,
        lt_items     TYPE ty_t_pr_items,
        ls_item      TYPE zcl_z_purchase_requisi_mpc=>ts_pr_item,
        ls_pritems   TYPE ts_pr_items,
        ls_data      TYPE ts_pr_items.

  " Read the inbound deep payload into the local deep structure
  io_data_provider->read_entry_data( IMPORTING es_data = ls_data ).
  lt_items = ls_data-pritemcollection.

  " Map the inbound items to the BAPI item structure
  LOOP AT lt_items INTO ls_item.
    ls_bapi_item-material   = ls_item-material.
    ls_bapi_item-plant      = ls_item-plant.
    ls_bapi_item-quantity   = ls_item-quantity.
    ls_bapi_item-doc_type   = ls_item-documenttype.
    ls_bapi_item-deliv_date = ls_item-deliverydate.
    ls_bapi_item-pur_group  = ls_item-purgroup.
    ls_bapi_item-preq_item  = ls_item-pritem.
    ls_bapi_item-short_text = ls_item-shorttext.
    ls_bapi_item-mat_grp    = ls_item-materialgroup.
    ls_bapi_item-unit       = ls_item-unit.
    ls_bapi_item-item_cat   = ls_item-itemcategory.
    ls_bapi_item-acctasscat = ls_item-acctassigncategory.
    ls_bapi_item-preq_no    = ls_item-prnumber.
    APPEND ls_bapi_item TO lt_bapi_item.
    CLEAR: ls_bapi_item, ls_item.
  ENDLOOP.

  " Create the purchase requisition
  CALL FUNCTION 'BAPI_REQUISITION_CREATE'
    IMPORTING
      number            = lv_new_pr_no
    TABLES
      requisition_items = lt_bapi_item
      return            = lt_return.

  " Note: lt_return should be checked for errors before committing
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'
    EXPORTING
      wait = 'X'.

  " Normalize the new number to internal format (leading zeros)
  CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'
    EXPORTING
      input  = lv_new_pr_no
    IMPORTING
      output = lv_new_pr_no.

  " Return the created entity, including the new PR number
  ls_pritems-prnumber = lv_new_pr_no.
  copy_data_to_ref( EXPORTING is_data = ls_pritems
                    CHANGING  cr_data = er_deep_entity ).
ENDMETHOD.
Step 10. The service maintenance entry is created automatically, but we still need to register the service. Select the system (here EC7) and click on Register.
Give the system alias LOCAL_GW and click OK, then maintain the registered service.
Step 11. Test the service.
Provide the following query:
/sap/opu/odata/SAP/Z_PURCHASE_REQUISITION_TEST_SRV/PRHeaderCollection('0010003245')?$expand=PRItemcollection
Click on Use as Request.
The response is then copied into the request area on the left side.
To create a purchase requisition, remove the purchase requisition number on the left side and use
/sap/opu/odata/SAP/Z_PURCHASE_REQUISITION_TEST_SRV/PRHeaderCollection in the Gateway client, then click on POST.
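For illustration, the request body for the deep insert could look roughly like this in JSON (a sketch only; the property names follow the entity types modelled above, but the exact names and value formats depend on your model):
{
  "PRNumber": "",
  "PrItemCollection": [
    {
      "PRItem": "00010",
      "Material": "100-100",
      "Plant": "1000",
      "Quantity": "5.000",
      "Unit": "EA",
      "DocumentType": "NB",
      "PurGroup": "001",
      "ShortText": "Test item",
      "MaterialGroup": "001",
      "ItemCategory": "0",
      "AcctAssignCategory": "K",
      "DeliveryDate": "\/Date(1500000000000)\/"
    }
  ]
}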
Purchase requisition '0010017270' is created, as shown in the screenshot below.
Checking the table-level entries in EBAN, we can find purchase requisition '0010017270'.
Probably everyone understands the need to perform load testing of applications and services; usually it comes down to how much time and money one can spend on it. This blog gives a quick and free approach. Although not very advanced, it can be used to simulate load easily.
A fairly recent version of cURL is required. If you plan on load testing with HTTPS, your cURL needs to support HTTPS, and the certificate of the accessed server needs to be trusted as well. With cURL the latter can be a bit tricky depending on how cURL was compiled. In case cURL doesn't have built-in support for certificates, you can retrieve the CAs from the cURL website and convert them to CRT format using the provided mk-ca-bundle. Name the output file curl-ca-bundle.crt and place it in the cURL directory.
This blog was written using Windows as the platform; it should be straightforward to map the commands to your operating system of choice, such as Linux.
First you need to solve how to get the CSRF token and use it for all POST requests. The solution is to invoke cURL with -i to retrieve the response headers and with -D to store them in a file so that you can parse them:
curl -s -i -H "Accept: application/json" -H "X-CSRF-Token: fetch" -D headers.txt -u user:password https://myurl/mylistservice
To parse the header variables use
findstr "x-csrf-token" headers.txt > token.txt
and
set /p TOKEN=<token.txt
to store the CSRF token in an environment variable called TOKEN. To use the CSRF token in POST requests, pass -H "%TOKEN%" in addition to any other header variables you may want to set, e.g.
curl -s -H "Content-Type: application/json" -H "Accept: application/json" -H "%TOKEN%" -u user:password -X POST --data-binary @payload.txt https://myurl/mycreateservice
The payload (in JSON format) is stored in a text file called payload.txt.
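For illustration, payload.txt could contain something like the following (a made-up body; the actual properties depend entirely on the service being called):
{ "ProductID": "HT-1000", "Quantity": 1 }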
If you followed the instructions in this blog to the letter, you probably still couldn't make it work: any attempt to use POST would fail with a token validation error, even though you sent the CSRF token in the request. The most likely reason is that Gateway generates a new token for each request, invalidating the previous one. The root cause is that Gateway doesn't recognize the session. It's not sufficient to pass the CSRF token together with Basic Authentication; you also need to pass the identifying session information, which is stored in cookies. With cURL you use the -c parameter to store the received cookies and -b to send them back; in other words, you add -c cookies.txt -b cookies.txt to each cURL command. The commands used in our examples become:
curl -s -i -c cookies.txt -b cookies.txt -H "Accept: application/json" -H "X-CSRF-Token: fetch" -D headers.txt -u user:password https://myurl/mylistservice
curl -s -c cookies.txt -b cookies.txt -H "Content-Type: application/json" -H "Accept: application/json" -H "%TOKEN%" -u user:password -X POST --data-binary @payload.txt https://myurl/mycreateservice
Now that you have your elementary script ready, you might ask yourself how to simulate load, i.e. have multiple instances running. The quick solution is to run the script in as many directories as you want parallel instances. A cleverer approach is to have cURL create and use files in one directory with a prefix/suffix tying them to a specific instance. I did the latter by using the parent process ID (of the Command Prompt) as the identifier; that way you can start as many scripts from the same directory as you want and as your systems can handle. To give you an idea how that looks, here is how the two commands from our example appear in my script:
curl -s -i -c _cookies_"%PPID%".txt -b _cookies_"%PPID%".txt -H "Accept: application/json" -H "X-CSRF-Token: fetch" -D _headers_"%PPID%".txt -u "%USR%":"%PWD%" https://myurl/mylistservice
curl -s -c _cookies_"%PPID%".txt -b _cookies_"%PPID%".txt -H "Content-Type: application/json" -H "Accept: application/json" -H "%TOKEN%" -u "%USR%":"%PWD%" -X POST --data-binary @payload.txt https://myurl/createservice
The PPID environment variable stores the process ID of the parent process. User and password are stored in the environment variables USR and PWD defined in the script. To use multiple users, you could either use a rotating (or random) list or the multiple-directory approach.
If you want your script to wait between individual commands (or loop iterations), you can use
SET /A SLEEP=%RANDOM% * 10 / 32768 + 5
timeout /t %SLEEP% /nobreak
to set a wait time of roughly 5-15 seconds. If you plan to implement the parent-process-ID approach, you'll soon find that it's surprisingly difficult to retrieve the process ID of the parent process on Windows; I ended up writing a small C# program and calling it from the script.
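Putting the pieces together, a single-instance version of the whole loop could look like the sketch below (Windows batch, reusing the commands shown above; the URLs, user, password and payload file are placeholders, and the per-instance PPID suffixing is left out for brevity):

@echo off
rem Minimal cURL load loop - stop with Ctrl+C
set USR=myuser
set PWD=mypassword

:loop
rem Fetch the CSRF token and session cookies
curl -s -i -c cookies.txt -b cookies.txt -H "Accept: application/json" -H "X-CSRF-Token: fetch" -D headers.txt -u "%USR%":"%PWD%" https://myurl/mylistservice > nul
findstr "x-csrf-token" headers.txt > token.txt
set /p TOKEN=<token.txt

rem POST the payload using the token and the same session
curl -s -c cookies.txt -b cookies.txt -H "Content-Type: application/json" -H "Accept: application/json" -H "%TOKEN%" -u "%USR%":"%PWD%" -X POST --data-binary @payload.txt https://myurl/mycreateservice

rem Wait roughly 5-15 seconds before the next iteration
set /a SLEEP=%RANDOM% * 10 / 32768 + 5
timeout /t %SLEEP% /nobreak > nul
goto loop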
While the approach discussed in this blog is elementary, it does the job. Commercial alternatives provide more advanced and realistic load testing, but they usually come with a significant price tag, especially if you need to simulate tens, hundreds or even thousands of clients.
Issue:
In an integration between Gateway and a HANA view, the HANA view provides an input parameter that acts as a filter on one field; normally the input parameter accepts a single value. The issue is how to handle a filter with multiple values. For example, we need to compute the customer age distribution based on the multiple branches a customer belongs to.
Solution:
1. AMDP:
Define and implement the method in an AMDP class; reconstructed here as a complete method from the original fragments (AMDP interface markers and OPTIONS/USING clauses are omitted for brevity):
CLASS-METHODS agedistribution
  IMPORTING
    VALUE(branch) TYPE string
  EXPORTING
    VALUE(et_agedistribution) TYPE tt_output_agedistribution.

METHOD agedistribution BY DATABASE PROCEDURE FOR HDB LANGUAGE SQLSCRIPT.
  et_agedistribution = SELECT "AGE", "AGERATE"
    FROM "_SYS_BIC"."ZOUI5_CV_CUSTOMERAGE_SQL"
      ( PLACEHOLDER."$$ZBRANCH$$" => :branch )
    GROUP BY "AGE", "AGERATE", "ZCOUNT";
ENDMETHOD.
2. ABAP Class:
Combine the filter values into a WHERE condition and call the AMDP method.
DATA: lt_partner  TYPE RANGE OF bu_partner,
      ls_partner  LIKE LINE OF lt_partner,
      o_cond      TYPE REF TO cl_lib_seltab,
      branch_cond TYPE string.

" Build a range table with the selected branches
ls_partner-sign   = 'I'.
ls_partner-option = 'EQ'.
ls_partner-low    = '1000000001'.
APPEND ls_partner TO lt_partner.
ls_partner-low    = '1000000002'.
APPEND ls_partner TO lt_partner.

" Convert the range table into an SQL WHERE condition
o_cond = cl_lib_seltab=>new( it_sel = lt_partner ).
branch_cond = o_cond->sql_where_condition( iv_field = 'BRANCH' ).
CONCATENATE '( ' branch_cond ' )' INTO branch_cond.
branch_cond then looks like this:
( BRANCH = '1000000001' OR BRANCH = '1000000002' )
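In a real Gateway provider class the branch values would of course not be hard-coded as above; they can be derived from the incoming $filter of the entity set request. A rough sketch, reusing lt_partner and ls_partner from above and assuming a GET_ENTITYSET context with an entity property named BRANCH (the names are illustrative):

DATA: lt_filter_so TYPE /iwbep/t_mgw_select_option,
      ls_filter_so TYPE /iwbep/s_mgw_select_option,
      ls_so        TYPE /iwbep/s_cod_select_option.

" Read the $filter of the request as select options
lt_filter_so = io_tech_request_context->get_filter( )->get_filter_select_options( ).

" Copy the select options for property BRANCH into the range table
READ TABLE lt_filter_so INTO ls_filter_so WITH KEY property = 'BRANCH'.
IF sy-subrc = 0.
  LOOP AT ls_filter_so-select_options INTO ls_so.
    ls_partner-sign   = ls_so-sign.
    ls_partner-option = ls_so-option.
    ls_partner-low    = ls_so-low.
    APPEND ls_partner TO lt_partner.
  ENDLOOP.
ENDIF.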
Call the AMDP class:
TRY.
    cl_amdp_customer=>agedistribution(
      EXPORTING
        branch             = branch_cond
      IMPORTING
        et_agedistribution = lt_agedistribution ).
  CATCH cx_amdp_execution_failed INTO DATA(lx_age).
    " Handle or log the execution error here
ENDTRY.
3. Stored Procedure:
Update the stored procedure behind 'ZOUI5_CV_CUSTOMERAGE_SQL' to apply the filter:
var_filter = apply_filter("_SYS_BIC"."ZOUI5_CV_CUSTOMER", :ZBRANCH);

var_int = SELECT DISTINCT
            BRANCH,
            CUSTOMER,
            AGE
          FROM :var_filter;

The age distribution is then calculated based on var_int; the output columns are "AGE" and "AGERATE".
In this blog I would like to show the basics of OData service development with SAP Gateway when using code based service implementation as it is shown in the upcoming SAP CodeJam events about OData service development with SAP Gateway.
Though the recommended approach for OData service development in the most recent SAP NetWeaver releases is to use CDS views, there are still a number of use cases where code-based service implementation is a valid option.
As an example we will use a simple OData model that consists of SalesOrders and SalesOrderItems, with the option to navigate from a SalesOrder to the corresponding line items.
The examples shown in this blog are based on the latest SAP NetWeaver release 7.50, so that we can compare the different implementation approaches.
We will perform the following steps:
Project: ZE2E100_XX
Description: ZE2E100_XX
Service Package: $TMP
Note: Replace XX with your session-id / group number.
Now you have created an (empty) service implementation in the SAP backend.
select * from sepm_i_salesorder_e into corresponding fields of table @et_entityset.
(Replace ‘<XX>’ by your group number.)
After pressing the Execute button you see a list of sales orders.
data: lv_osql_where_clause type string.
lv_osql_where_clause = io_tech_request_context->get_osql_where_clause( ).
select * from sepm_i_salesorder_e
into corresponding fields of table @et_entityset
where (lv_osql_where_clause).
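Putting the two snippets together, the complete GET_ENTITYSET implementation could look roughly like this (a sketch; the method name assumes the entity set is called SalesOrderSet as described below, and error handling is omitted):

METHOD salesorderset_get_entityset.
  DATA lv_osql_where_clause TYPE string.

  " Derive a dynamic WHERE clause directly from the incoming $filter
  lv_osql_where_clause = io_tech_request_context->get_osql_where_clause( ).

  SELECT * FROM sepm_i_salesorder_e
    INTO CORRESPONDING FIELDS OF TABLE @et_entityset
    WHERE (lv_osql_where_clause).
ENDMETHOD.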
What is now left is the implementation of the GET_ENTITY method of the entity set SalesOrderSet, and the modelling and implementation of the navigation between the sales order header and the line items.
This will be described in the second part of this blog post, OData service development with SAP Gateway - code-based service development - Part II, because the editor refused to let me add any additional screenshots at this point.