Planet Scala

Scala blogs aggregated

May 14, 2012

implicit.ly

shapeless 1.2.2

A minor release of shapeless. The main changes include,

  • Scrap Your Boilerplate improvements

    • Data and DataT instances for any type with an associated HListIso, allowing any case class to be traversed/transformed with just the addition of a single one-line implicit declaration
    • Data and DataT instances for pairs generalized to arbitrary tuples
    • Added a SYB example which also illustrates the gains from using HListIso
  • Generalized toHList to anything viewable as a GenTraversable[_].
  • Added HList toArray.
  • Added infrastructure for lifting monomorphic functions to Poly1 and extending Poly1 domains to be universal.
  • HList unify and toList now properly handle empty HLists.

shapeless is an exploration of type class and dependent type based generic programming in Scala.

A series of articles on the implementation techniques used will appear here, and the project also has a mailing list.

via herald

Permalink

May 14, 2012 01:26 PM

May 11, 2012

implicit.ly

scalatra 2.1.0.M2

core

  • Support X-Http-Method-Override header.
  • Support mounting handlers by class as well as instance.
  • Support ActionResult class hierarchy to bundle status, headers, and body into one case class.
  • Returning an Int sets the status, just like Sinatra (see the sketch after this list).
  • CsrfTokenSupport recognizes the 'X-CSRF-Token' header.
  • Cross build dropped from artifact ID. The same build runs for all Scala 2.9.x.
  • Dropped support for Scala 2.8.x.
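
The ActionResult and Int-return items above translate into routes along these lines. This is only a sketch, with invented servlet and path names, and the exact helper signatures in 2.1.0.M2 may differ slightly:

import org.scalatra._

class WidgetsApp extends ScalatraServlet {

  // Returning an Int sets the response status, Sinatra-style.
  get("/missing") { 404 }

  // ActionResult helpers bundle status, headers and body in one value.
  get("/widgets/:id") {
    if (params("id") == "1") Ok("found it", Map("X-Backend" -> "demo"))
    else NotFound("no such widget")
  }
}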

fileupload

  • Deprecated in favor of native servlet handling. (Jetty users: requires >= 8.1.3.)
  • Exceptions now handled through standard error handler.

swagger

  • Support for route documentation with Swagger

test

  • Support for testing multipart requests.

Scalatra is a tiny, Sinatra-like web framework for Scala.

Permalink

May 11, 2012 03:05 PM

Tony Morris

SIP-18 is just another bad idea serving nobody

SIP-18 is a bad idea because it makes awful assumptions about what is in the interest of language newcomers. I have had no shortage of unsolicited advice on how to teach, most of it in layers of wrongness, so I am acutely aware of the sheer quantity of this kind of advice. Please refrain for now. This is only my opinion, because it has been asked of me more than twice.

SIP-18 (and the Scala collections library for that matter) is no different to Haskell’s (dreaded) monomorphism restriction (DMR). The DMR was introduced specifically because of another chronically bold over-estimate of one’s ability to understand the process of learning. It is now an undesired language issue that hinders all users, especially newcomers. In other words, it serves nobody’s interest, hinders everyone’s interest and especially, the interest of those for whom it was meant to serve. You need only spend a short period of time with newcomers to Haskell to be overwhelmed by the prominence of this fact.

I can hear the pragmatists in the background muttering something about trade-offs, not being so extreme and keeping it relevant to the real world and blah blah blah, <insert the usual pragmatist bullshit here>. There is nothing to be traded off when you offer to take $5 from me for the low cost of $10. Now stop it and get your head out of the clouds so I can talk to you sensibly.

Not only does SIP-18 not help newcomers at all, it helps nobody, hinders everybody and especially newcomers. It is not a trade-off, it is not a good idea; it is simply a bold, severely misguided assertion about how learning takes place — it’s not even an approximation. I have seen only scant pseudo-psychology to support its existence, which obviates its predictable failure.

Hopefully, the Scala guys will work this out, but if Scala’s remarkable precision to repeat historical mistakes is anything to go by, I do not hold high hopes.

Thanks for asking.

by Tony Morris at May 11, 2012 03:12 AM

May 10, 2012

scala-lang.org

Scala IDE 2.1 Special Edition for 2.10.0-M3 available now!

A few days back the Scala IDE team released an early preview of the Scala IDE V2.1 for Eclipse, based on the new milestone (M3) for the upcoming Scala 2.10. This release has all the new features in the Scala IDE M1, plus a few minor changes needed in order to support 2.10.

You can see the Release Notes for M1 to check the new features in the Scala IDE, and the Scala Change Log for what’s new in Scala 2.10. Read more...

by dotta at May 10, 2012 06:02 AM

May 09, 2012

Jesper Nordenberg

My Take on Haskell vs Scala

I've used both Haskell and Scala for some time now. They are both excellent and beautifully designed functional programming languages and I thought it would be interesting to put together a little comparison of the two, and what parts I like and dislike in each one. To be honest I've spent much more time developing in Scala than Haskell, so if any Haskellers feel unfairly treated please write a comment and I will correct the text.

This is a highly subjective post, so it doesn't contain many references to sources. It helps if the reader is somewhat familiar with both languages.

With that said, let's start the comparison.

Syntax

When it comes to syntax Haskell wins hands down. The combination of using white space for function and type constructor application, and currying, makes the code extremely clean, terse and easy to read. In comparison Scala code is full of distracting parentheses, curly braces, brackets, commas, keywords etc. I find that I've started using Haskell syntax when writing code on a whiteboard to explain something to a colleague, which must be an indication of its simplicity.
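
To make the comparison concrete, here is a small, invented zipWith in Scala, with the Haskell signature left in a comment for contrast:

// Scala needs brackets for type parameters, and parentheses, commas and
// explicit parameter types for value parameters:
def zipWith[A, B, C](f: (A, B) => C)(as: List[A], bs: List[B]): List[C] =
  (as, bs).zipped.map(f)

// The corresponding Haskell signature reads simply:
//   zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]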

Type Inference

The type inference in Haskell just works, and does so reliably. The type inference in Scala is clearly worse, but not too bad considering the complexity of the type system. Most of the time it works quite well, but if you write slightly complicated code you will run into cases where it will fail. Scala also requires type annotations in function/method declarations, while Haskell doesn't. In general I think it's a good idea to put type annotations in your public API anyway, but for prototyping/experimentation it's very handy to not be required to write them.
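
A small illustration of where the annotations land in Scala (names invented):

// Local definitions are inferred...
val xs = List(1, 2, 3)
val doubled = xs.map(_ * 2)            // inferred as List[Int]

// ...but parameter types must always be written, and a recursive method
// also needs an explicit result type:
def size[A](as: List[A]): Int =
  if (as.isEmpty) 0 else 1 + size(as.tail)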

Subtyping

By being Java compatible, Scala obviously has subtyping, while Haskell does not. Personally I'm on the fence whether subtyping is a good idea or not. On one hand it's quite natural and handy to be able to specify that an interface is a subtype of some other interface, but on the other hand subtyping complicates the type system and type inference greatly. I'm not sure the benefits outweigh the added complexity, and I don't think I'm the only one in doubt.

Modules vs Objects

The Haskell module system is very primitive; it's barely enough to get by with. In contrast Scala's object/module system is very powerful, allowing objects to implement interfaces, mix in traits, bind abstract type members etc. This enables new and powerful ways to structure code, for example the cake pattern. Of course objects can also be passed around as values in the code. Objects as modules just feels natural IMHO.
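
As a rough sketch of the cake pattern referred to above (the component names are invented for illustration):

// Each "module" is a trait; a self-type declares its dependency on another module.
trait RepositoryComponent {
  def repository: Repository
  trait Repository { def find(id: Int): Option[String] }
}

trait ServiceComponent { this: RepositoryComponent =>
  class Service { def describe(id: Int) = repository.find(id).getOrElse("unknown") }
}

// The application object wires the modules together.
object App extends ServiceComponent with RepositoryComponent {
  val repository = new Repository {
    def find(id: Int) = if (id == 1) Some("one") else None
  }
  val service = new Service
}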

Typeclasses vs Implicit Parameters

Typeclasses in Haskell and implicit parameters in Scala are used to solve basically the same problems, and in many ways they are very similar. However, I prefer Scala's solution as it gives the developer local, scoped control over which instances are available to the compiler. Scala also allows explicit instance argument passing at the call site, which is sometimes useful. The idea that there is only one global type class instance for a given type feels too restrictive; it also fits very badly with dynamic loading, which I discuss in a later section. I also don't like that type class instances aren't values/objects/records in Haskell.

Scala has more advanced selection rules to determine which instance is considered most specific. This is useful in many cases. GHC doesn't allow overlapping instances unless a compiler flag is given, and even then the rules for choosing the most specific one are very restricted.
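
A minimal sketch of the Scala encoding being described, with one instance found implicitly and one passed explicitly at the call site (the Show type and its instances are invented):

trait Show[A] { def show(a: A): String }

def display[A](a: A)(implicit s: Show[A]): String = s.show(a)

implicit val intShow: Show[Int] = new Show[Int] { def show(a: Int) = a.toString }

display(42)                                      // instance resolved from local scope
display(42)(new Show[Int] { def show(a: Int) = "0x" + a.toHexString })   // passed explicitly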

Lazy vs Strict Evaluation

Haskell has lazy evaluation by default, Scala has strict evaluation by default. I'm not a fan of lazy evaluation by default, perhaps because I've spent much time programming in languages like assembly and C/C++ which are very close to the machine. I feel that to be able to write efficient software you must have a good knowledge of how the execution machine works, and for me lazy evaluation doesn't map well to that execution model. I'm simply not able to easily grasp the CPU and memory utilization of code which uses lazy evaluation. I understand the advantages of lazy evaluation, but being able to get predictable, consistent performance is too important a part of software development to be overlooked. I'm leaning more towards totality checking, as seen in newer languages like Idris, combined with advanced optimizations, to get many of the benefits of lazy evaluation.
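
In Scala, laziness is opt-in rather than the default, for example via lazy vals and by-name parameters; a small sketch:

// Evaluated at most once, and only if actually used.
lazy val expensive = { println("computing"); 42 }

// A by-name parameter delays evaluation to the point of use.
def when[A](cond: Boolean)(body: => A): Option[A] =
  if (cond) Some(body) else None

when(false)(expensive)   // prints nothing; 'expensive' is never forced
when(true)(expensive)    // prints "computing" once and returns Some(42)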

Type Safety

In Haskell side effects are controlled and checked by the compiler, and all functions are pure. In Scala any function/method can have hidden side effects. In addition, for Java compatibility the dreaded null value can be used for any user-defined type (although it's discouraged), often resulting in a NullPointerException. Also, exceptions are used quite often in Java libraries, and they are not checked by the Scala compiler, possibly resulting in unhandled error conditions. Haskell is definitely a safer language to program in, but assuming some developer discipline Scala can be quite safe too.
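
The usual discipline on the Scala side is to fence nulls and exceptions off at the Java boundary, for example:

// Option(x) is None when x is null, so NPEs stay at the boundary.
def env(name: String): Option[String] = Option(System.getenv(name))

// Catch a Java exception explicitly instead of letting it escape.
def parsePort(s: String): Either[String, Int] =
  try Right(s.toInt) catch { case _: NumberFormatException => Left("not a number: " + s) }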

Development Environment

The development environment for Scala is, while not on par with Java's, quite nice, with good Eclipse and IntelliJ plugins supporting code completion, browsing, instant error highlighting, API help and debugging. Haskell doesn't have anything that comes close to this (and no, Leksah is not there :-) ). And I don't buy the argument that you don't need a debugger for Haskell programs; especially for imperative code, a good debugger is an invaluable tool. The developer experience in any language is much improved by a good IDE, and Scala is way ahead here.
Update: The EclipseFP Eclipse plugin for Haskell looks promising.

Runtime System

I like the JVM; it's a very nice environment to run applications in. Besides the state of the art code optimizer and garbage collectors, there are lots of tools available for monitoring, profiling, tuning and load balancing on the JVM that just work out of the box with Scala. Being able to easily load and unload code dynamically is also useful, for example in distributed computing.

Haskell has a much more static runtime system. I don't have much experience with runtime tools for Haskell, but my understanding is that it doesn't have the same range of tools as the JVM.

Performance

The compilers for Haskell (GHC) and Scala (scalac) are very different. GHC performs quite complex optimizations on the code during compilation, while scalac mainly just outputs Java bytecode (with some minor optimizations) which is then dynamically compiled and optimized by Hotspot at runtime. Unfortunately things like value types and proper tailcall elimination, which are supported by GHC, are currently not implemented in the JVM. Hopefully this will be rectified in the future.

One thing Haskell and Scala have in common is that they both use garbage collection. While this is convenient in many cases and sometimes even necessary, it can often be a source of unnecessary overhead and application latency. It also gives the developer very little control over the memory usage of an application. GHC supports annotation of types to control boxing, and Hotspot does a limited form of escape analysis to eliminate heap allocation in some cases. However there is much room for improvement in both environments when it comes to eliminating heap allocations. Much can be learned from languages like Rust, BitC, ATS and even C++. For some applications it's critical to be able to have full control over memory management, and it's very nice to have that in a high level language.

Final Words

Haskell and Scala are both very powerful and practical programming languages, but with different strengths and weaknesses. Neither language is perfect and I think there is room for improvements in both. I will definitely continue to use both of them, at least until something better shows up. Some new interesting ones on my radar are Rust, ATS and Idris.

by Jesper Nordenberg (noreply@blogger.com) at May 09, 2012 10:18 PM

May 08, 2012

implicit.ly

ScalesXml 0.3-RC6

This is a feature release, adding the full set of useful XPath Axe, string-based XPath evaluation (via a popular open source XPath library), useful equality testing, a lot of new documentation and many smaller improvements in syntax and usability.

This version has been built with xsbt 0.11.x and migrated to github. This release's documentation can be found here and provides many examples of how to use Scales Xml.

All the Axe you'll ever need

Scales 0.3 adds the following axe:

  • preceding-sibling (preceding_sibling_::)
  • following-sibling (following_sibling_::)
  • descendant (descendant_::)
  • following (following_::)
  • preceding (preceding_::)
  • ancestor (ancestor_::)
  • ancestor-or-self (ancestor_or_self_::)
  • descendant-or-self (descendant_or_self_::)

This provides all of the XPath 1 and 2 axe (namespace axis excluded).

Enhanced internal XPath queries

  • position() (pos)

    pos_<, pos_==, pos_>
  • last() (last)

    last_<, last_==, last_>
  • position() == last() (pos_eq_last)
  • Easier to extend and re-use queries and axe

    xfilter, xmap, xflatMap

String based XPath evaluation

  • Evaluate normal XPath 1.0 strings to XmlPaths
  • Evaluates to an Iterable[Either[AttributePath, XmlPath]] or,
  • get[T] a value directly from XPath 1.0 (e.g. get[String]("normalize(//*)"))
  • Allows querying for the few predicates and XPaths that Scales cannot process (and dynamic queries of course)
  • Optional dependency

New xpath.Functions

  • Unified interface for XPath function handling
  • text, QName and Boolean typeclasses
  • Implementations for all relevant Scales Xml types

    • string( attribute ) makes sense whilst string( QName ) does not

New XmlComparison framework (2.9.x only)

  • Compare Xml structures and types
  • Customisable comparison rules
  • Default Scalaz === Equal type classes
  • XmlComparison type classes provide full details of differences:

    • A QName based path to the difference
    • The objects which are different

Extra Fun

  • Forwards and Backwards Path Iterators (used by following and preceding)
  • DuplicateFilter now works with the Scalaz Equal typeclass
  • Using AttributeQNames with the tuple arrow now creates QNames as you'd expect
  • DslBuilder allows direct manipulation of trees via folding
  • Simplified Builder usage: why write /(<(Elem(... ) when you can just write /(Elem)?
  • Java 1.7 JAXP implementation checks - Schema validation is optimised (no serialization)

How To Use

Scales 0.3 moves to Sonatype under the organisation org.scalesxml, with support for 2.8.1, 2.8.2, 2.9.1 and 2.9.2. As such add:

libraryDependencies ++= Seq(
  // just for the core library
  "org.scalesxml" %% "scales-xml" % "0.3-RC6"
  // or, use this instead for String based XPaths (Jaxen, also includes the core)
  "org.scalesxml" %% "scales-jaxen" % "0.3-RC6"
)

to your xsbt builds or use scales-xml_2.9.2 as the id when using Maven.

Scales Xml is an alternative XML library for Scala providing a coherent model, querying and manipulation via an XPath-like syntax, better performance, a highly customisable equality framework and an Iteratee-based pull API.

via herald

Permalink

May 08, 2012 05:54 AM

sbt-buildinfo 0.1.2

Minor enhancements

  • sbt 0.11.3
  • Adds buildInfoBuildNumber. #2

sbt-buildinfo is a plug-in for sbt to generate Scala source from your build definitions.
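
For context, usage of the 0.1.x line looked roughly like this; the setting names follow my recollection of the plugin's README from that period and should be treated as an approximation:

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-buildinfo" % "0.1.2")

// build.sbt (assumed import and settings for the pre-AutoPlugin era)
import sbtbuildinfo.Plugin._

seq(buildInfoSettings: _*)

sourceGenerators in Compile <+= buildInfo

buildInfoKeys := Seq[BuildInfoKey](name, version, scalaVersion, sbtVersion)

buildInfoPackage := "org.example"   // yields a generated org.example.BuildInfo object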

via herald

Permalink

May 08, 2012 03:14 AM

sbt-assembly 0.8.1

minor enhancements

  • sbt 0.11.3
  • more default exclusions and merges contributed by @rjmac

sbt-assembly is a plug-in for Simple Build Tool that creates a single jar of your project including all of its dependencies.
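
Typical setup for the 0.8.x line, sketched from the README of that era (treat the exact incantation as an assumption); running sbt assembly then produces the single fat jar:

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.8.1")

// build.sbt
import AssemblyKeys._

seq(assemblySettings: _*)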

via herald

Permalink

May 08, 2012 02:54 AM

May 07, 2012

implicit.ly

scalariform 0.1.2

  • Revamp command-line tool with more intuitive behaviour
  • Add --quiet, --recurse, --stdin, --stdout options to command-line tool
  • FIX: Scaladoc comment formatting could break nested comments (issue #36)
  • Tidy up, optimise lexer code
  • FIX: Parse 5.f, 5.d as floating points, unless in Scala 2.11+ mode
  • FIX: Bug with line-per-annotation style
  • Add support for String interpolation
  • Add support for macros
  • Add --scalaVersion=<version> flag to command-line tool
  • Support expr[T1, T2][T3, T4] and g()[String] syntaxes
  • Fix AST selection for prefix expression

Scalariform is a source code formatter for Scala.

via herald

Permalink

May 07, 2012 09:20 PM

Coderspiel

Adam Hinz & George Adams: Dispatch, Unfiltered, Blue Eyes, & Play: A Comparison - PHASE: Philly Area Scala Enthusiasts (Philadelphia, PA) - Meetup

May 07, 2012 03:18 PM

"A good developer has a natural, almost visceral aversion to complexity."

“A good developer has a natural, almost visceral aversion to complexity.”

- Complication is What Happens When You Try to Solve a Problem You Don’t Understand (via bdarfler)

May 07, 2012 02:11 PM

implicit.ly

Configrity 0.10.1

Small maintenance release, including:

  • When loading a configuration from the classpath, a FileNotFoundException is thrown (issue #8) -- Martin Konicek
  • Looking for a non-existent key will throw a NoSuchElementException with a message clearly referring to the missing key (issue #11) -- Jussi Virtanen
  • List values are sanitized by adding quotes when needed (issue #12)
  • Artefacts for Scala 2.9.2

If you wish for extra features, feel free to ask.

Configrity is a simple, immutable and flexible Scala API for handling configurations.
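
For orientation, basic usage looks roughly like the following; the package and accessor names follow my reading of the project README and are assumptions rather than a definitive reference:

import org.streum.configrity._

// Assumed loader and accessors; file name is illustrative.
val config = Configuration.load("application.conf")

val host = config[String]("http.host")     // throws NoSuchElementException if the key is absent
val port = config.get[Int]("http.port")    // Option[Int]; None if the key is absent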

via herald

Permalink

May 07, 2012 08:02 AM

unfiltered 0.6.2

New Features

Multipart POST support for Netty

This substantial contribution by g-eorge handles file uploads of arbitrary size, as previously supported only for filter plans. See the netty-uploads readme for details.

Migration Note: If you were using the unfiltered-uploads module before, you should now depend on unfiltered-filter-uploads. The former now serves as a base implementation for the filter and netty upload modules.

Kits

  • unfiltered.kit.Secure redirects HTTP requests to HTTPS.
  • unfiltered.kit.Auth requires basic auth for matching requests.
  • unfiltered.kit.AsyncCycle — removed this promise/future-aware kit for the time being

Extractors

  • unfiltered.request.QueryParams allows access to query-string parameters exclusively; it doesn't read in the request body for POST params.
  • unfiltered.request.Charset The existing extractor yielded both the charset string and an HttpRequest object, in the older fashion of Unfiltered. It is altered to now yield only the charset, which is a breaking change. If you see a compilation error for a Charset matcher, simply remove the trailing parameter in its parameter list.
  • unfiltered.request.{Accept,AcceptCharset,AcceptEncoding,AcceptLanguage} all behave more correctly thanks to hamnis's content negotiation fixes.

Response Functions

  • Support for rfc6585, additional status codes

Fixes

  • Issue 110 Keymanagers loaded redundantly for Netty bindings
  • Issue 111 TLS contexts created redundantly for Netty bindings
  • Issue 119 NoSuchElementException for parameterValues
  • Issue 123 Find path suffix only from path part of uri
  • Issue 126 Tiny fix in url generation for Jetty Https Server

Unfiltered is a toolkit for servicing HTTP requests in Scala.
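
As a reminder of the shape of an Unfiltered app, here is a minimal filter plan (paths and object names invented); it assumes the 0.6.x extractor style:

import unfiltered.request._
import unfiltered.response._

object Hello extends unfiltered.filter.Plan {
  def intent = {
    case GET(Path("/hello")) => Ok ~> ResponseString("hello")
    case _                   => NotFound ~> ResponseString("nothing here")
  }
}

// Run on an embedded Jetty server:
// unfiltered.jetty.Http(8080).filter(Hello).run()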

via herald

Permalink

May 07, 2012 04:47 AM

May 06, 2012

Quoi qu'il en soit

Java, PowerMock and the slow death of pointless Interfaces


Back in the day, say around 2000, the use of Java interfaces was pushed as the one true way (tm) for expressing dependencies between classes. The established wisdom was that if one class needs another then it should be expressed as a dependency on an interface. There are two advantages to expressing dependencies via interfaces: (1) you can have a test implementation of the interface, so you can unit test a class without using the real dependency; (2) you can have multiple implementations of the interface, which might be chosen at runtime. In practice, (2) happens quite rarely, but it remains a completely valid case for interface use.


And so, the wisdom went, you were condemned to eternal damnation if you called a static method on another class. A call to a static method is hard-wired like concrete and steel. There is no way to stub it out for unit testing.


Enter PowerMock, in about 2008/2009, which works with EasyMock or Mockito and allows you to mock pretty much anything:


"PowerMock is a framework that extend other mock libraries such as EasyMock with more powerful capabilities. PowerMock uses a custom classloader and bytecode manipulation to enable mocking of static methods, constructors, final classes and methods, private methods, removal of static initializers and more. By using a custom classloader no changes need to be done to the IDE or continuous integration servers which simplifies adoption. Developers familiar with the supported mock frameworks will find PowerMock easy to use, since the entire expectation API is the same, both for static methods and constructors. PowerMock aims to extend the existing API's with a small number of methods and annotations to enable the extra features. Currently PowerMock supports EasyMock and Mockito."


I have seen PowerMock used a lot in several organizations. It just works(tm). I have noticed that it simplifies the way people write code.


So, with PowerMock in hand, here is some advice for writing Java, that goes against established wisdom.


1) Don't write to interfaces unless you really need multiple implementations! Why create an interface and a class when just a class will do? If you find you really need an interface later then create one and use it. But remember, most of the time YAGNI for unit testing, thanks to PowerMock. (Where YAGNI means "you ain't gonna need interfaces", as opposed to the more traditional "you ain't gonna need it".)


2) Use EasyMock or Mockito for unit testing and the extras that PowerMock gives you if you need to. (I have nothing against JMock, and perhaps JMock has the equivalent features that PowerMock provides.)
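
To make points 1 and 2 concrete, here is a Scala-flavoured sketch of testing against a plain class with Mockito's class mocking; the class and method names are invented, and PowerMock's additional static/final mocking is not shown:

import org.mockito.Mockito.{mock, when, verify}

// A concrete collaborator; no interface has been extracted for it.
class PriceService {
  def quote(sku: String): BigDecimal = sys.error("talks to a remote system")
}

class Checkout(prices: PriceService) {
  def total(sku: String, qty: Int): BigDecimal = prices.quote(sku) * qty
}

// In a unit test, Mockito mocks the concrete class directly:
val prices = mock(classOf[PriceService])
when(prices.quote("ABC")).thenReturn(BigDecimal(2))
assert(new Checkout(prices).total("ABC", 3) == BigDecimal(6))
verify(prices).quote("ABC")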


3) Do not be afraid to use static methods if appropriate. When is appropriate? Now there's a question Rich Hickey would be happy to answer. 


Thanks to PowerMock, we are free to use interfaces where they are really needed.





by Tim Azzopardi (noreply@blogger.com) at May 06, 2012 10:17 PM

Coderspiel

flatMap Oslo (May 15, 16) is Norway’s first Scala...


flatMap Oslo (May 15, 16) is Norway’s first Scala conference, and the first anywhere to have such exciting light fixtures. (Good speaker lineup, too.)

May 06, 2012 02:39 PM

May 05, 2012

Quoi qu'il en soit

Examples of Java calling Oracle PLSQL anonymous blocks


Why would you do this? Answer: Development agility

An Oracle DBA might say that Java should not use anonymous PLSQL blocks as (a) it embeds database logic into Java code, and (b) it is bad for performance, as a stored procedure would have a precompiled execution plan.

But in the organization where I am currently consulting:
  • iBatis and Hibernate (arguably) embed database logic into Java applications. In theory it's done in a "portable" way that is not tied to the database implementation. Like that's ever going to change!
  • Logistically and bureaucratically, it takes weeks to get a packaged stored procedure created and installed. In my experience this is typical of most large organizations that separate Java developers from database developers and DBAs. The human communication between the teams in itself creates a bottleneck.
  • The PLSQL blocks are stored in separate files and loaded from those files. Database gurus tweak the SQL and hand it over for complex queries and updates.
  • Performance is actually not bad, because Oracle bind variables are used in the PLSQL. This means that Oracle sees the same text every time and reuses execution plans.
  • Over time, if found to be durable, the PLSQL can be converted to a stored procedure and the anonymous PLSQL files replaced with simple procedure calls.


Example 1: Call an anonymous PLSQL Block with one input string and one output string parameter:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Types;

public class CallPLSQLBlockWithOneInputStringAndOneOutputStringParameter {

    // Warning: this is a simple example program : In a long running application,
    // exception handlers MUST clean up connections statements and result sets.
    public static void main(String[] args) throws SQLException {

        DriverManager.registerDriver(new oracle.jdbc.OracleDriver());

        final Connection c = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:XE", "system", "manager");
        String plsql = "" +
            " declare " +
            " p_id varchar2(20) := null; " +
            " begin " +
            " p_id := ?; " +
            " ? := 'input parameter was = ' || p_id;" +
            " end;";
        CallableStatement cs = c.prepareCall(plsql);
        cs.setString(1, "12345");
        cs.registerOutParameter(2, Types.VARCHAR);
        cs.execute();

        System.out.println("Output parameter was = '" + cs.getObject(2) + "'");

        cs.close();
        c.close();
    }
}
Example 2: Call an anonymous PLSQL Block with one input string and one output string parameter and one output cursor (query result) parameter:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Types;

import oracle.jdbc.OracleTypes;

public class CallPLSQLBlockWithOneInputStringAndOneOutputStringParameterAndOneOutputCursorParameter {

    public static void main(String[] args) throws Exception {

        DriverManager.registerDriver(new oracle.jdbc.OracleDriver());

        final Connection c = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:XE", "system", "manager");
        String plsql = "" +
            " declare " +
            " p_id varchar2(20) := null; " +
            " l_rc sys_refcursor;" +
            " begin " +
            " p_id := ?; " +
            " ? := 'input parameter was = ' || p_id;" +
            " open l_rc for " +
            " select 1 id, 'hello' name from dual " +
            " union " +
            " select 2, 'peter' from dual; " +
            " ? := l_rc;" +
            " end;";

        CallableStatement cs = c.prepareCall(plsql);
        cs.setString(1, "12345");
        cs.registerOutParameter(2, Types.VARCHAR);
        cs.registerOutParameter(3, OracleTypes.CURSOR);

        cs.execute();

        System.out.println("Result = " + cs.getObject(2));

        ResultSet cursorResultSet = (ResultSet) cs.getObject(3);
        while (cursorResultSet.next ())
        {
            System.out.println (cursorResultSet.getInt(1) + " " + cursorResultSet.getString(2));
        }
        cs.close();
        c.close();
    }
}

Example 3: Call an anonymous PLSQL Block with one input string array and one output string parameter and one output cursor (query result) parameter:

import java.sql.Array;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Types;

import oracle.jdbc.OracleTypes;
import oracle.sql.ARRAY;
import oracle.sql.ArrayDescriptor;

public class CallPLSQLBlockWithOneInputStringArrayAndOneOutputStringParameterAndOneOutputCursorParameter {

    public static void main(String[] args) throws Exception {

        DriverManager.registerDriver(new oracle.jdbc.OracleDriver());

        // Warning: this is a simple example program : In a long running application,
        // error handlers MUST clean up connections statements and result sets.

        final Connection c = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:XE", "system", "manager");
        String plsql = "" +
            " declare " +
            " p_id string_array := null; " +
            " l_rc sys_refcursor;" +
            " begin " +
            " p_id := ?; " +
            " ? := 'input parameter first element was = ' || p_id(1);" +
            " open l_rc for select * from table(p_id) ; " +
            " ? := l_rc;" +
            " end;";

        String[] stringArray = new String[]{ "mathew", "mark"};

        // MUST CREATE THIS IN ORACLE BEFORE RUNNING
        System.out.println("(This should be done once in Oracle)");
        c.createStatement().execute("create or replace type string_array is table of varchar2(32)");

        ArrayDescriptor descriptor = ArrayDescriptor.createDescriptor( "STRING_ARRAY", c );

        Array array_to_pass = new ARRAY( descriptor, c, stringArray );

        CallableStatement cs = c.prepareCall(plsql);
        cs.setArray( 1, array_to_pass );
        cs.registerOutParameter(2, Types.VARCHAR);
        cs.registerOutParameter(3, OracleTypes.CURSOR);

        cs.execute();

        System.out.println("Result = " + cs.getObject(2));

        ResultSet cursorResultSet = (ResultSet) cs.getObject(3);
        while (cursorResultSet.next ())
        {
            System.out.println (cursorResultSet.getString(1));
        }
        cs.close();
        c.close();
    }
}

Example 4: Call an anonymous PLSQL Block with one input structure array and one output string parameter and one output cursor (query result) parameter:

import java.sql.Array;
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Types;

import oracle.jdbc.OracleTypes;
import oracle.sql.ARRAY;
import oracle.sql.ArrayDescriptor;
import oracle.sql.STRUCT;
import oracle.sql.StructDescriptor;

public class CallPLSQLBlockWithOneInputStructureArrayAndOneOutputStringParameterAndOneOutputCursorParameter {

    public static void main(String[] args) throws Exception {

        DriverManager.registerDriver(new oracle.jdbc.OracleDriver());

        // Warning: this is a simple example program : In a long running application,
        // error handlers MUST clean up connections statements and result sets.

        final Connection c = DriverManager.getConnection("jdbc:oracle:thin:@localhost:1521:XE", "system", "manager");
        String plsql = "" +
            " declare " +
            " p_id student_array := null; " +
            " l_rc sys_refcursor;" +
            " begin " +
            " p_id := ?; " +
            " ? := 'input parameter first element was = (' || p_id(1).id_num || ', ' || p_id(1).name || ')'; " +
            " open l_rc for select * from table(p_id) ; " +
            " ? := l_rc;" +
            " end;";

        // MUST CREATE ORACLE TYPES BEFORE RUNNING
        setupOracleTypes(c);

        StructDescriptor structDescr = StructDescriptor.createDescriptor("STUDENT", c);
        STRUCT s1struct = new STRUCT(structDescr, c, new Object[]{1, "mathew"});
        STRUCT s2struct = new STRUCT(structDescr, c, new Object[]{2, "mark"});
        ArrayDescriptor arrayDescr = ArrayDescriptor.createDescriptor( "STUDENT_ARRAY", c );
        Array array_to_pass = new ARRAY( arrayDescr, c, new Object[]{s1struct, s2struct} );

        CallableStatement cs = c.prepareCall(plsql);
        cs.setArray( 1, array_to_pass );
        cs.registerOutParameter(2, Types.VARCHAR);
        cs.registerOutParameter(3, OracleTypes.CURSOR);

        cs.execute();

        System.out.println("Result = " + cs.getObject(2));

        ResultSet cursorResultSet = (ResultSet) cs.getObject(3);
        while (cursorResultSet.next ())
        {
            System.out.println (cursorResultSet.getInt(1) + " " + cursorResultSet.getString(2));
        }
        cs.close();
        c.close();
    }

    // Creates the STUDENT object type and STUDENT_ARRAY collection type used by the block above.
    private static void setupOracleTypes(Connection c) throws SQLException {
        System.out.println("(This should be done once in Oracle)");
        c.createStatement().execute("create or replace type student as object (id_num number, name varchar2(32))");
        c.createStatement().execute("create or replace type student_array is table of student");
    }
}